Computer Networks and Distributed Systems
Mahdi Sattarivand
Volume 1, Issue 2 , May 2015, , Pages 9-14
Abstract
Peer-to-Peer (P2P) systems have been the center of attention in recent years due to their advantages. Since each node in such networks can act both as a service provider and as a client, they are subject to different attacks. It is therefore vital to manage confidence in these vulnerable environments in order to eliminate unsafe peers. This paper investigates the use of genetic programming for establishing the trustworthiness of a peer without central monitoring. A confidence management model is proposed in which every peer ranks other peers according to a locally calculated confidence value based on recommendations and previous interactions. The results show that this model identifies malicious nodes without a central supervisor or a global confidence value, and thus keeps the system functioning.
Index Terms— peer-to-peer systems, confidence, genetic programming, malicious nodes.
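The local-confidence idea in this abstract can be sketched as a weighted blend of direct experience and recommendations. This is an illustrative toy, not the paper's genetic-programming model; the function name, the 0.7 weight and the neutral 0.5 prior are all assumptions.

```python
# Hypothetical sketch of local-trust ranking in a P2P network: each peer
# blends its own interaction history with recommendations from others.

def local_trust(successes, failures, recommendations, alpha=0.7):
    """Blend direct experience with averaged peer recommendations."""
    total = successes + failures
    direct = successes / total if total else 0.5  # neutral prior for strangers
    rec = sum(recommendations) / len(recommendations) if recommendations else 0.5
    return alpha * direct + (1 - alpha) * rec

# A peer with 8 good and 2 bad interactions, recommended at 0.6 and 0.8:
score = local_trust(8, 2, [0.6, 0.8])
```

Each peer would keep such scores for its neighbors and rank them locally, with no global aggregation step.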
Computer Architecture and Digital Systems
Babak Tavakoli; Mehdi Hosseinzadeh; Somayeh Jassbi
Volume 2, Issue 1 , February 2016, , Pages 9-16
Abstract
The Residue Number System (RNS) is a non-weighted number system for integer arithmetic, based on the residues of a number with respect to a certain set of numbers called the moduli set. The main characteristic and advantage of the residue number system is reduced carry propagation in calculations. The elimination of carry propagation makes it possible to maximize parallelism and reduce delay. The residue number system is best suited to calculations involving addition and multiplication, but operations such as division, comparison between numbers, sign determination and overflow detection are complicated. In this paper a method for overflow detection is proposed for the special moduli set {2^n-1, 2^n, 2^n+1}. This moduli set is favorable because of the ease of its forward and reverse conversions. The proposed method is based on partitioning the dynamic range into groups by using the New Chinese Remainder Theorem and exploiting the properties of residue differences. Each addition operand is mapped into a group, the sum of these groups is compared with an indicator, and overflow is detected. The proposed method can detect overflow with less delay than previous methods.
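As context for the moduli set, a small sketch of forward conversion and a naive overflow check against the dynamic range M. The paper's group-based detector avoids this full comparison; the function names and the final check are illustrative assumptions only.

```python
# Forward conversion to the moduli set {2^n - 1, 2^n, 2^n + 1} and a
# naive overflow check against the dynamic range M.

def rns_encode(x, n):
    """Residues of x with respect to the special moduli set."""
    moduli = (2 ** n - 1, 2 ** n, 2 ** n + 1)
    return tuple(x % m for m in moduli), moduli

def add_with_overflow(a, b, n):
    """Add two operands in RNS; flag overflow past the dynamic range."""
    M = (2 ** n - 1) * 2 ** n * (2 ** n + 1)  # dynamic range of the set
    (ra, moduli), (rb, _) = rns_encode(a, n), rns_encode(b, n)
    rsum = tuple((x + y) % m for x, y, m in zip(ra, rb, moduli))
    return rsum, (a + b) >= M  # naive check, for illustration only

# For n = 3 the moduli are (7, 8, 9) and M = 504; 300 + 250 overflows.
residues, overflow = add_with_overflow(300, 250, 3)
```

The per-modulus additions are independent, which is exactly the carry-free parallelism the abstract describes.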
Computer Networks and Distributed Systems
Minoo Soltanshahi
Volume 2, Issue 3 , August 2016, , Pages 9-14
Abstract
Cloud computing is a recent technology that provides distributed computation over the Internet. It meets the needs of users by sharing resources and using virtualization technology. Workflow user applications refer to sets of tasks to be processed within the cloud environment. Scheduling algorithms strongly influence the efficiency of cloud computing environments through the selection of suitable resources and the assignment of workflows to them. Given the factors affecting their efficiency, these algorithms try to use resources optimally and increase the efficiency of this environment. The palbimm algorithm provides a scheduling method that meets the majority of the requirements of this environment and its users. In this article, we improve the efficiency of the algorithm by adding fault-tolerance capability to it. Since this capability operates in parallel with task scheduling, it has no negative impact on the makespan. This is supported by simulation results in the CloudSim environment.
Zahra Barati; Mahdi Jafari Shahbazzadeh; Vahid Khatibi Bardsiri
Volume 2, Issue 4 , November 2016, , Pages 9-16
Abstract
Predicting the effort of a successful project has been a major problem for software engineers, the significance of which has led to extensive investigation in this area. One of the main objectives of the software engineering community is the development of useful models to predict the cost of software product development. The absence of such activities before starting a project leads to various problems. Researchers focus their attention on determining the techniques with the highest effort prediction accuracy, or on suggesting new combinatory techniques that provide better estimates. Despite the variety of methods proposed for effort estimation in software projects, the compatibility and accuracy of existing methods are not yet satisfactory. In this article, a new method is presented to increase the accuracy of effort estimation. The model is based on type-2 fuzzy logic, in which the gradient descent algorithm and a neuro-fuzzy-genetic hybrid approach are used to train the type-2 fuzzy system. Three datasets were used to evaluate the proposed algorithm. The results of the proposed model were compared with neuro-fuzzy and type-1 fuzzy systems; this comparison reveals that the results of the proposed model are more favorable than those of the other two models.
Software Engineering and Information Systems
Vahid Khatibi Bardsiri; Mahboubeh Dorosti
Volume 2, Issue 2 , May 2016, , Pages 11-22
Abstract
One of the important aspects of software projects is estimating the cost and time required to develop them; nowadays, this issue has become one of the key concerns of project managers. Accurate estimation of the effort needed to produce and develop software strongly affects the success or failure of software projects and is regarded as a vital factor. The failure of current models in this field to achieve convincing accuracy, and their limited flexibility, have attracted the attention of researchers in the last few years. Despite improvements in effort estimation, no consensus has been reached on a single best estimation model. One highly regarded effort estimation method is COCOMO. Although COCOMO was introduced many years ago, it retains its effort estimation capability and is widely used in its newer forms. Researchers have repeatedly attempted to improve the effort estimation capability of COCOMO by improving its structure; however, COCOMO results are not always satisfactory. The present study introduces a hybrid model for increasing the accuracy of COCOMO estimation. Combining the bee colony algorithm with the COCOMO estimation method, the proposed approach obtains more effective coefficients than the basic form of COCOMO; selecting the best coefficients maximizes its efficiency. The simulation results reveal the superiority of the proposed model in terms of MMRE and PRED(0.15).
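The evaluation criteria MMRE and PRED(0.15) named above are simple to state; a minimal sketch with invented sample efforts (not the paper's data):

```python
# MMRE (Mean Magnitude of Relative Error) and PRED(level), the two
# standard accuracy metrics for software effort estimation.

def mre(actual, estimated):
    """Magnitude of Relative Error for one project."""
    return abs(actual - estimated) / actual

def mmre(actuals, estimates):
    """Mean MRE across all projects; lower is better."""
    return sum(mre(a, e) for a, e in zip(actuals, estimates)) / len(actuals)

def pred(actuals, estimates, level=0.15):
    """Fraction of projects estimated within `level` of the actual effort."""
    hits = sum(1 for a, e in zip(actuals, estimates) if mre(a, e) <= level)
    return hits / len(actuals)

actuals = [100.0, 200.0, 50.0, 80.0]    # made-up actual efforts
estimates = [110.0, 150.0, 52.0, 90.0]  # made-up model outputs
```

A better estimator lowers MMRE and raises PRED(0.15) simultaneously.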
Computer Networks and Distributed Systems
Farhad Rad; hadi pazhokhzadeh; hamid parvin
Volume 3, Issue 1 , February 2017, , Pages 11-18
Abstract
Nowadays, developed and developing countries use smart systems to solve their transportation problems. Intelligent parking guidance systems, which help drivers find an available parking space, are considered one of the architectural requirements in transportation. In this paper, we present a parking space reservation method based on an adaptive neuro-fuzzy inference system (ANFIS) and a multi-objective genetic algorithm. In modeling this system, the final destination, searching time and cost of a parking space are used. We also use a vehicular ad-hoc network (VANET) and time series to predict traffic flow and choose the best path. The benefits of the proposed system are reduced searching time and reduced average walking and travel times. Evaluations performed in MATLAB show that the proposed method achieves a good best-cost sum, which is useful and meaningful both for drivers reserving a parking space and for facility managers. The simulation results show that the performance and accuracy of the method are significantly improved compared to previous works.
Pattern Analysis and Intelligent Systems
OLATUNJI HEZEKIAH ADIGUN; OLUSOLA JOEL OYEDELE
Volume 5, Issue 1 , February 2019, , Pages 11-18
Abstract
This paper employs an Adaptive Neuro-Fuzzy Inference System (ANFIS) to predict the water level that leads to flooding in coastal areas. ANFIS combines the verbal power of fuzzy logic with the numerical power of a neural network. Meteorological and astronomical data of Santa Monica, a coastal area in California, U.S.A., were obtained. A portion of the data was used to train the ANFIS network, while other portions were used to check and test the generalization ability of the ANFIS model. Water level predictions were made for 24, 48 and 72 hours, with training, checking and testing of the model performed for each prediction period. The results show that the 48-hour prediction has the lowest Root Mean Square Error (RMSE), at 0.05426, 0.06298 and 0.05355 for the training, checking and testing data groups respectively, indicating that the prediction is most accurate at 48 hours.
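The RMSE figures quoted above follow the usual definition; a minimal sketch with arbitrary sample values (not the Santa Monica data):

```python
import math

# Root Mean Square Error between an observed and a predicted series,
# as used to compare the 24/48/72-hour prediction horizons.

def rmse(observed, predicted):
    """RMSE over two equal-length series."""
    n = len(observed)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

levels    = [1.2, 1.5, 1.1, 1.4]    # made-up observed water levels
predicted = [1.25, 1.45, 1.15, 1.35]  # made-up model outputs
error = rmse(levels, predicted)
```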
Software Engineering and Information Systems
Behrouz Sadeghi; Vahid Khatibi Bardsiri; Monireh Esfandiari; Farzad Hosseinzadeh
Volume 1, Issue 4 , November 2015, , Pages 15-24
Abstract
One of the most important and valuable goals of the software development life cycle is software cost estimation, or SCE. In recent years, SCE has attracted the attention of researchers due to the huge number of software project requests. Many models using heuristic and meta-heuristic algorithms have been proposed to carry out the machine learning process for SCE. COCOMO81, proposed by Barry Boehm in 1981, is one of the most popular models for SCE. Although COCOMO81 is an old estimation model, it has been widely used for cost estimation in its newer forms. In this paper, the Imperialist Competitive Algorithm (ICA) is employed to tune the COCOMO81 parameters. Experimental results show that on the separated COCOMO81 dataset, ICA can estimate the COCOMO81 model parameters such that the performance measures are significantly improved. The proposed hybrid model is flexible enough to tune the parameters for any dataset in COCOMO81 form.
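For reference, the basic COCOMO81 effort equation that such tuning targets is effort = a * KLOC^b person-months, with published coefficients per development mode; the ICA search in the paper replaces a and b with tuned values per dataset.

```python
# Basic COCOMO81 effort equation with Boehm's published coefficients
# for the three development modes.

COCOMO81_MODES = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def basic_effort(kloc, mode="organic"):
    """Estimated effort in person-months for a project of `kloc` KLOC."""
    a, b = COCOMO81_MODES[mode]
    return a * kloc ** b

# A 32-KLOC organic project:
pm = basic_effort(32.0)
```

Tuning a and b to a local dataset, as the paper does with ICA, keeps this structure but fits the coefficients to observed projects.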
Computer Networks and Distributed Systems
Alireza Enami; Javad Akbari Torkestani
Volume 7, Issue 1 , February 2021, , Pages 19-34
Abstract
Fog computing is seen as a bridge between smart IoT devices and large-scale cloud computing, making it possible to extend cloud services to network edge devices. As one of the most important services of such a system, resource allocation must always be available to achieve the goals of Fog computing. Resource allocation is the process of distributing limited available resources among applications based on predefined rules. Because the problems raised in resource management are NP-hard, and due to the complexity of resource allocation, heuristic algorithms are promising methods for solving the resource allocation problem. In this paper, an algorithm based on learning automata is proposed to solve this problem, using two learning automata: one associated with applications (LAAPP) and the other with Fog nodes (LAN). In this method, an application is selected from the action set of LAAPP and then a Fog node is selected from the action set of LAN. If the requirements on deadline, response time and resources are met, the resource is allocated to the application. The efficiency of the proposed algorithm is evaluated through several simulation experiments under different Fog configurations, and the results are compared with several existing methods in terms of makespan, average response time, load balancing and throughput.
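A generic linear reward-inaction (L_RI) update, a classic learning-automata rule, can be sketched as follows. The specific interplay of the paper's LAAPP and LAN automata is an assumption beyond this sketch; only the probability-update rule itself is standard.

```python
# Linear reward-inaction (L_RI): on reward, the chosen action's selection
# probability grows and all others shrink proportionally; on penalty,
# the probabilities are left unchanged.

def lri_update(probs, chosen, rewarded, a=0.1):
    """Return updated action probabilities after one environment response."""
    if not rewarded:
        return list(probs)  # inaction on penalty
    return [p + a * (1 - p) if i == chosen else p * (1 - a)
            for i, p in enumerate(probs)]

# Four equally likely Fog nodes; node 2 served the request successfully:
p = lri_update([0.25, 0.25, 0.25, 0.25], chosen=2, rewarded=True)
```

Repeated rewards concentrate probability on consistently good actions, which is how the automaton converges on suitable application/node pairings.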
Computer Architecture and Digital Systems
Ali Ramezanzad; Midia Reshadi
Volume 4, Issue 2 , May 2018, , Pages 61-68
Abstract
Nowadays, the growing demand for supporting multiple applications has led to the integration of multiple IP cores on a single chip, so finding a truly scalable communication architecture has become a critical concern. To this end, the Network-on-Chip (NoC) paradigm has emerged as a promising solution to on-chip communication challenges in silicon-based electronics. Many of today's NoC architectures are based on grid-like topologies, which are also used in application-specific designs. The small-world network idea has recently been introduced to optimize the performance of Networks-on-Chip; with this method the architecture is neither fully customized nor completely regular. Results have shown that long-range links, while optimizing network power and performance, increase area consumption, so an acceptable bound on area consumption should be considered. In this paper we present a methodology that automatically optimizes an architecture while respecting the designer's area-consumption restrictions.
Pattern Analysis and Intelligent Systems
Masoud Barkhan; Fattah Alizadeh; Vafa Maihami
Volume 5, Issue 2 , May 2019, , Pages 71-80
Abstract
For many years, speech has been the most natural and efficient means of information exchange for human beings. With the advancement of technology and the prevalence of computer usage, the design and production of speech recognition systems have drawn the attention of researchers. Lip-reading techniques face many challenges in speech recognition; one of them is noise, which in some situations is the main cause of errors in correctly recognizing speech. Image processing is one way to address this problem, and the purpose of this study is to design and implement a system for automatic recognition of Persian letters through image-processing techniques. To this end, after building a database of Persian verbal phonemes, we first used image-processing techniques to remove noise and detect the contour of the lips, applying edge detection to identify the lip edges. After finding the upper and lower points of the lips in five frames of each film, we used the mean gap between these points as the characteristic of each phoneme. Having built a database of these features, we categorized the phonemes with a back-propagation artificial neural network and a radial basis function network, ultimately achieving the desired categorization results. The classification precision of the back-propagation network was higher than that of the radial basis function ANN.
Computer Networks and Distributed Systems
Bahareh Rahmati; Amir Masoud Rahmani; Ali Rezaei
Volume 3, Issue 2 , May 2017, , Pages 75-80
Abstract
High-performance computing and vast storage are two key factors required for executing data-intensive applications. In comparison with traditional distributed systems such as data grids, cloud computing provides these factors on a more affordable, scalable and elastic platform. Furthermore, access to data files is critical for performing such applications; sometimes data access becomes a bottleneck for the whole cloud workflow system and dramatically decreases its performance. Job scheduling and data replication are two important techniques which can enhance the performance of data-intensive applications, and it is wise to integrate them into one framework to achieve a single objective. In this paper, we integrate data replication and job scheduling with the aim of reducing response time by reducing data access time in a cloud computing environment. This is called data replication-based scheduling (DRBS). Simulation results show the effectiveness of our algorithm in comparison with well-known algorithms such as random and round-robin.
Computer Networks and Distributed Systems
Ghazaal Emadi; Amir Masoud Rahmani; Hamed Shahhoseini
Volume 3, Issue 3 , August 2017, , Pages 135-144
Abstract
Cloud computing is a computational model which provides resources for users' requests on demand. The need to plan the scheduling of users' jobs has emerged as an important challenge in the field of cloud computing, mainly for several reasons: ever-increasing advancements in information technology, an increase in applications and in user demand for high-quality versions of them, and the popularity of cloud computing among users and its rapid growth in recent years. This research applies the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an evolutionary optimization algorithm, to task scheduling in the cloud computing environment. The findings indicate that the presented algorithm reduces the execution time of all tasks compared to the SPT, LPT, and RLPT algorithms.
Keywords: Cloud Computing, Task Scheduling, Virtual Machines (VMs), Covariance Matrix Adaptation Evolution Strategy (CMA-ES)
Software Engineering and Information Systems
Negin Bagheri Renani; Elham Yaghoubi
Volume 4, Issue 3 , August 2018, , Pages 143-154
Abstract
Due to the increasing growth of processing cores in complex computational systems, interconnects have become a bottleneck for such systems. With progress in constructing complex photonic interconnects on chip, optical data transmission is the best candidate to replace electrical interconnects, since it provides on-chip links with high bandwidth and low insertion loss. Optical routers play an important role in the Optical Network-on-Chip (ONoC); they are responsible for selecting the path between the optical signal source and the destination. In recent years, silicon optical routers based on Micro-Ring Resonators (MRRs) and Mach-Zehnder Interferometers (MZIs) have been proposed. Designing optical switches with Mach-Zehnder Interferometers is attractive because MRR switches have inherently narrow bandwidth, whereas MZI switches have inherently wide bandwidth and can route data at high speed with nanosecond switching times. MRR switches, on the other hand, consume less power and area than MZIs. Optical routers can also be divided into (a) general routers and (b) specific routers; in specific routers, some I/O paths are omitted to avoid deadlock. Several kinds of MZI- and MRR-based optical routers are then reviewed, along with a series of their parameters.
Computer Networks and Distributed Systems
Azam Seilsepour; Reza Ravanmehr; Hamid Reza Sima
Volume 5, Issue 3 , August 2019, , Pages 143-160
Abstract
Big data analytics is one of the most important subjects in computer science. Today, due to the increasing expansion of Web technology, a large amount of data is available to researchers, and extracting information from these data is a requirement for many organizations and business centers. In recent years, the massive amount of Twitter social networking data has become a platform for data mining research to discover facts, trends, events, and even predictions of some incidents. In this paper, a new framework for clustering and information extraction is presented to analyze sentiments in big data. The proposed method is based on keywords and polarity determination, employing seven emotional signal groups. The dataset consists of 2,077,610 tweets in both English and Persian. We use the Hive tool in the Hadoop environment to cluster the data, and the Wordnet and SentiWordnet 3.0 tools to analyze the sentiments of fans of Iranian athletes. The results for the 2016 Olympic and Paralympic events over a one-month period show a high degree of precision and recall for this approach compared to other keyword-based methods of sentiment analysis. Moreover, big data processing tools such as Hive and Pig show a shorter response time than traditional data processing methods for the pre-processing, classification and sentiment analysis of the collected tweets.
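A toy keyword-and-polarity scorer in the spirit of the emotional signal groups described above; the word lists are invented stand-ins for the SentiWordnet lexicon the framework actually uses.

```python
# Keyword-based polarity: count positive-signal words minus
# negative-signal words in a tweet. The two sets are illustrative only.

POSITIVE = {"win", "gold", "great", "proud"}
NEGATIVE = {"lose", "injury", "sad", "fail"}

def polarity(tweet):
    """Score a tweet as positive (>0), negative (<0) or neutral (0)."""
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

score = polarity("Proud of the gold medal win")
```

A real pipeline would replace the sets with per-word polarity weights from a lexicon and normalize over tweet length.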
Computer Networks and Distributed Systems
Elaheh Radmehr; HASSAN SHAKERI
Volume 3, Issue 4 , November 2017, , Pages 189-194
Abstract
Wireless sensor networks have been widely considered one of the most important 21st-century technologies and are used in many applications such as environmental monitoring, security and surveillance. Wireless sensor networks are used when it is not possible or convenient to supply signaling or power wires to a wireless sensor node; the node must therefore be battery powered. Coverage and network lifetime are major problems in WSNs, so to address this difficulty we propose a combinational method consisting of fuzzy logic and genetic algorithms. The proposed scheme detects coverage holes in the network and, with the contribution of fuzzy logic, selects the most appropriate neighbor of each hole to move towards the blank area and compensate for the coverage loss; the node's new coordinates are determined by a genetic algorithm. Since fuzzy logic is effective when more than one factor influences decision making, and genetic algorithms perform well on dynamic problems, the proposed solution results in fast, optimized and reliable output.
Pattern Analysis and Intelligent Systems
Lida Shahmiri; Sajad Tavassoli; Seyed Navid Hejazi Jouybari
Volume 5, Issue 4 , November 2019, , Pages 213-220
Abstract
Vehicle detection and classification play an important role in decision making for traffic control and management. This paper presents a novel approach to automatically detecting and counting vehicles for traffic monitoring through background subtraction and morphological operators. We present adaptive background subtraction that is compatible with weather and lighting changes. Among the various challenges involved in background modeling, overcoming lighting changes in the scene and modeling dynamic backgrounds are the most important issues. The basic architecture of our approach comprises three steps: (1) background subtraction, (2) segmentation, and (3) object detection and vehicle counting. An adaptive background is produced at each frame, using a binary motion mask to create an instantaneous image of the background. Morphological operators are used to remove noise before the images are segmented and vehicles are detected and counted. The algorithm is efficient and able to run in real time. Experimental results and conclusions are presented.
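The binary motion mask and the adaptive background update can be sketched with plain lists of lists standing in for grayscale frames; the threshold and running-average weight are illustrative assumptions, and a real system would use image arrays and add the morphological clean-up step.

```python
# Frame differencing against a background model, plus a running-average
# background update (the adaptive part of the pipeline).

def motion_mask(frame, background, thresh=25):
    """Binary mask: 1 where the pixel differs enough from the background."""
    return [[1 if abs(f - b) > thresh else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def update_background(frame, background, alpha=0.05):
    """Blend the new frame into the background so the model adapts slowly."""
    return [[(1 - alpha) * b + alpha * f
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

bg    = [[10, 10], [10, 10]]
frame = [[10, 200], [10, 10]]   # a bright object enters one pixel
mask  = motion_mask(frame, bg)
```

The small alpha makes gradual lighting changes fold into the background while fast-moving vehicles stay in the mask.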
Pattern Analysis and Intelligent Systems
Minakshi Boruah
Volume 4, Issue 4 , November 2018, , Pages 219-228
Abstract
Biometric recognition is an automatic identification method based on unique features or characteristics possessed by human beings, and iris recognition has proved to be one of the most reliable biometric methods available, owing to the accuracy provided by the iris's unique epigenetic patterns. The main steps in any iris recognition system are image acquisition, iris segmentation, iris normalization, feature extraction and feature matching. The Equal Error Rate (EER) is considered the best metric for evaluating an iris recognition system. In this paper, several parameters are thoroughly tested and evaluated over the CASIA-IrisV1 database to obtain an improved parameter set: the scaling factor used to speed up the Circle Hough Transform (CHT), the sigma for Gaussian blurring during edge detection, the radius for weak-edge suppression in the edge detector used during segmentation, the gamma correction factor, the central wavelength for convolution with the Log-Gabor filter, and the ratio of sigma to central frequency during feature extraction. This paper demonstrates how the parameters should be set to obtain an optimized iris recognition system.
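EER is the operating point where the false-accept and false-reject rates coincide. A crude estimate simply picks the sampled threshold where the two rates are closest; the FAR/FRR values below are invented, and a real evaluation would interpolate between thresholds.

```python
# Approximate Equal Error Rate from paired FAR/FRR samples taken at a
# sweep of decision thresholds.

def equal_error_rate(far, frr):
    """Midpoint of FAR and FRR at the threshold where they are closest."""
    i = min(range(len(far)), key=lambda k: abs(far[k] - frr[k]))
    return (far[i] + frr[i]) / 2

far = [0.40, 0.20, 0.10, 0.05, 0.01]  # false accepts fall as threshold tightens
frr = [0.01, 0.04, 0.09, 0.20, 0.35]  # false rejects rise
eer = equal_error_rate(far, frr)
```

Lower EER means the FAR/FRR trade-off curve sits closer to the origin, which is why it summarizes a recognition system in a single number.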
Pattern Analysis and Intelligent Systems
Marzieh Faridi Masouleh
Volume 4, Issue 1 , February 2018, , Pages 13-20
Abstract
Business intelligence (BI) technologies have been adopted by different types of organizations. The banking sector is among the service industries most influenced by technology today. This is manifested in the way banking operations have evolved from the pure exchange of cheques, cash and other negotiable instruments to the application of IT (Information Technology) to transact business in this service industry. This study of the impact of BI technology adoption among Iranian banks revealed that adoption has made the banking industry in Iran competitive and has improved operational efficiency. In terms of risk reduction, however, BI technologies can lead to the downfall of these banks if not used appropriately. BI solutions allow the banking industry in Iran to use the available data to exploit competitive advantage and to better understand the demands and needs of customers by facilitating effective communication.
Pattern Analysis and Intelligent Systems
Sahar Rahmatian; Reza Safabakhsh
Volume 1, Issue 2 , May 2015, , Pages 15-22
Abstract
Multiple people detection and tracking is a challenging task in real-world crowded scenes. In this paper, we present an online multiple-people tracking-by-detection approach with a single camera. We detect objects with deformable part models and a visual background extractor. In the tracking phase we use a combination of support vector machine (SVM) person-specific classifiers, similarity scores, the Hungarian algorithm and inter-object occlusion handling. Detections are used to train the person-specific classifiers and to help guide the trackers: a similarity score based on the detections and spatial information is computed, and detections are assigned to trackers with the Hungarian algorithm. To handle inter-object occlusion we use explicit occlusion reasoning. The proposed method requires no prior training and imposes no constraints on environmental conditions. Our evaluation shows that the proposed method outperforms state-of-the-art approaches by 10% and 15%, or achieves comparable performance.
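The tracker-to-detection matching described above is an assignment problem on similarity costs; the Hungarian algorithm solves it in polynomial time, but for a tiny illustrative matrix a brute-force search over permutations finds the same optimal matching. The cost values are invented.

```python
import itertools

# Minimum-cost one-to-one matching of trackers (rows) to detections
# (columns); brute force stands in for the Hungarian algorithm here.

def best_assignment(cost):
    """Column permutation minimizing the total matching cost."""
    n = len(cost)
    return min(itertools.permutations(range(n)),
               key=lambda perm: sum(cost[i][perm[i]] for i in range(n)))

# cost[i][j]: dissimilarity between tracker i and detection j
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
assignment = best_assignment(cost)
```

In practice the similarity scores from the person-specific classifiers would be negated (or subtracted from a maximum) to form this cost matrix.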
Pattern Analysis and Intelligent Systems
Farzaneh Famoori; Vahid Khatibi bardsiri; Shima Javadi Moghadam; Fakhrosadat Fanian
Volume 2, Issue 3 , August 2016, , Pages 15-26
Abstract
One of the most important aspects of software project management is the estimation of the cost and time required to develop an information system. Software managers therefore try to base estimates on the behavior, properties, and constraints of the project. Software cost estimation refers to the process of predicting the development requirements of a software system. Various effort estimation models focused on intelligent techniques have been presented in recent years. This study uses a clustering approach to estimate the required effort in software projects. Effort is estimated with stepwise regression (SWR) and multiple linear regression (MLR) models as well as the classification and regression tree (CART) method. The performance of these methods is experimentally evaluated on real software projects, and clustering of the projects is applied to the estimation process. The results indicate that combining the clustering method with algorithmic estimation techniques can improve the accuracy of the estimates.
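The cluster-then-estimate idea can be illustrated in miniature: partition projects, fit a separate regression model per partition, and estimate new projects with the matching model. This sketch uses a simple size threshold in place of a real clustering algorithm and ordinary least squares in place of SWR/MLR/CART; the data and threshold are invented for illustration.

```python
import statistics

def fit_line(points):
    """Ordinary least squares y = a*x + b for one feature (size -> effort)."""
    xs = [p[0] for p in points]
    mx, my = statistics.mean(xs), statistics.mean(p[1] for p in points)
    a = sum((x - mx) * (y - my) for x, y in points) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def cluster_then_estimate(projects, split):
    """Toy version of the paper's idea: partition projects (here by a
    size threshold rather than a real clustering algorithm), fit one
    regression model per cluster, estimate with the matching model."""
    small = [p for p in projects if p[0] < split]
    large = [p for p in projects if p[0] >= split]
    models = {"small": fit_line(small), "large": fit_line(large)}
    def estimate(size):
        a, b = models["small" if size < split else "large"]
        return a * size + b
    return estimate

# (size, effort) pairs; two clearly separated project populations.
projects = [(10, 25), (12, 30), (14, 34), (100, 400), (120, 470), (140, 545)]
est = cluster_then_estimate(projects, split=50)
print(round(est(11), 1))  # estimate from the small-project model: 27.4
```

A single global regression would be pulled toward the large projects; fitting per cluster keeps each model local, which is the accuracy gain the study reports.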
Computer Networks and Distributed Systems
Somayeh Taherian Dehkordi; Vahid Khatibi Bardsiri
Volume 1, Issue 3 , August 2015, , Pages 17-22
Abstract
Since software systems play a more important role in applications than ever, security has become one of the most important indicators of software quality. Cloud computing refers to services that run in a distributed network and are accessible through common Internet protocols. A proper scheduling method can improve resource efficiency by decreasing response time and cost. This research studies existing approaches to task scheduling and resource allocation in cloud infrastructures and assesses their advantages and disadvantages. A compound algorithm is then presented that allocates tasks to resources appropriately and decreases runtime. We propose a new method for task scheduling based on learning automata (LA). The method, named RAOLA, is trained on historical information about task execution in the cloud; it divides tasks into classes and evaluates them, then manages virtual machines to capture physical resources in each period based on the rate of each task class, thereby improving the efficiency of the cloud network.
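The core LA mechanism behind such a scheduler can be sketched with a minimal linear reward-inaction (L_RI) automaton: it keeps a probability vector over actions (here, task classes or VM pools), samples an action, and reinforces it when the environment rewards it. The reward rule and the three-action setup below are illustrative assumptions, not the RAOLA implementation.

```python
import random

class LearningAutomaton:
    """Minimal linear reward-inaction (L_RI) automaton, the basic LA
    update used by LA-based schedulers. Actions might represent task
    classes or VM pools; the reward signal here is simulated."""
    def __init__(self, n_actions, alpha=0.1):
        self.p = [1.0 / n_actions] * n_actions  # action probabilities
        self.alpha = alpha                      # learning rate
    def choose(self):
        r, acc = random.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r <= acc:
                return i
        return len(self.p) - 1
    def reward(self, action):
        # Reinforce the rewarded action, shrink the rest (L_RI update);
        # on penalty, L_RI leaves the probabilities unchanged.
        for i in range(len(self.p)):
            if i == action:
                self.p[i] += self.alpha * (1.0 - self.p[i])
            else:
                self.p[i] *= 1.0 - self.alpha

random.seed(0)
la = LearningAutomaton(3)
for _ in range(200):
    a = la.choose()
    if a == 1:  # pretend only action 1 (e.g. the fastest VM pool) succeeds
        la.reward(a)
print(max(range(3), key=lambda i: la.p[i]))  # converges toward action 1
```

Over repeated periods the automaton concentrates probability on the action that historically performed best, which is how execution history steers future placement decisions.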
Computer Architecture and Digital Systems
Mehrdad Poorhosseini
Volume 2, Issue 1 , February 2016, , Pages 17-26
Abstract
Quantum-dot cellular automata (QCA) is an important nano-scale technology for implementing both combinational and sequential systems. QCA has the potential to achieve low power dissipation and to operate at high speed, up to THz frequencies. However, the large probability of fabrication defects in QCA is a fundamental challenge for this emerging technology, and the various defects must be thoroughly understood. In this paper, a complete survey of different QCA faults is presented first, followed by techniques for improving fault tolerance in QCA circuits. The effect of the missing-cell fault, an important defect, on the XOR gate, one of the basic building blocks in QCA technology, is then examined through exhaustive simulations. An improvement technique is applied to these XOR structures, which are then re-simulated to measure the gain in fault tolerance. The results show that different QCA XOR gates have different sensitivity to this fault. After applying the improvement technique, the tolerance of the XOR gates increases; moreover, the gates show similar sensitivity to this defect, indicating the effectiveness of the improvement.
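The flavor of such exhaustive fault simulation can be shown at the logic level. QCA circuits are built from majority gates (AND(x,y) = MAJ(x,y,0), OR(x,y) = MAJ(x,y,1)) and inverters; the sketch below enumerates, for one majority-gate XOR decomposition, how many input patterns each single stuck-at-0 gate fault corrupts. This is a crude logic-level stand-in: a real missing-cell defect acts at the cell level and needs a tool such as QCADesigner, and the gate names here are invented.

```python
from itertools import product

def maj(a, b, c):
    """Three-input majority vote, the basic QCA logic primitive."""
    return int(a + b + c >= 2)

def xor_net(a, b, stuck=None):
    """XOR from majority gates: XOR(a,b) = OR(AND(a, NOT b), AND(NOT a, b)).
    `stuck` forces one gate's output to 0, a crude stand-in for a
    missing-cell defect killing that gate."""
    g = {}
    g["and1"] = maj(a, 1 - b, 0)      # a AND NOT b
    g["and2"] = maj(1 - a, b, 0)      # NOT a AND b
    if stuck in g:
        g[stuck] = 0
    g["or"] = maj(g["and1"], g["and2"], 1)
    if stuck == "or":
        g["or"] = 0
    return g["or"]

# Count how many of the four input patterns each single fault corrupts.
for fault in ["and1", "and2", "or"]:
    errors = sum(xor_net(a, b) != xor_net(a, b, stuck=fault)
                 for a, b in product([0, 1], repeat=2))
    print(fault, errors)  # and1 1 / and2 1 / or 2
```

Exactly as the paper observes for cell-level faults, different parts of the same XOR structure show different sensitivity: losing the output OR gate corrupts twice as many patterns as losing either AND gate.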
Computer Networks and Distributed Systems
Avishan Sharafi; Ali Rezaee
Volume 2, Issue 4 , November 2016, , Pages 17-30
Abstract
The Hadoop MapReduce framework is an important distributed processing model for large-scale data-intensive applications. Current Hadoop, and the existing rack-aware data placement strategy of the Hadoop distributed file system in a homogeneous cluster, assume that every node has the same computing capacity and assign the same workload to each node. Default Hadoop does not consider the load state of each node when distributing input data blocks, which can cause inappropriate overhead and reduce performance; in heterogeneous environments, such a data placement policy can noticeably reduce MapReduce performance and may increase energy dissipation. This paper proposes a resource-aware adaptive dynamic data placement algorithm (ADDP). With ADDP, the unbalanced node workload problem is resolved based on node load status: the method dynamically adapts and balances the data stored on each node of a heterogeneous Hadoop cluster. Experimental results show that data transfer overhead decreases in comparison with the DDP and traditional Hadoop algorithms. Moreover, the proposed method decreases execution time and improves system throughput by increasing resource utilization.
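The load-aware placement idea can be sketched as follows: instead of splitting input blocks evenly, each node receives a share proportional to its free capacity (capacity minus current load). The node names, capacities, and the linear weighting below are illustrative assumptions, not the ADDP formulas.

```python
def place_blocks(num_blocks, capacities, loads):
    """Toy load-aware placement in the spirit of ADDP: give each node a
    share of input blocks proportional to its free capacity, so faster
    or idler nodes store (and later process) more data locally."""
    free = {n: max(capacities[n] - loads.get(n, 0), 0) for n in capacities}
    total = sum(free.values()) or 1
    plan = {n: int(num_blocks * f / total) for n, f in free.items()}
    # Hand out any rounding remainder to the least-loaded nodes first.
    for n in sorted(free, key=free.get, reverse=True):
        if sum(plan.values()) == num_blocks:
            break
        plan[n] += 1
    return plan

# A heterogeneous 3-node cluster: relative capacities 8/4/2, equal load.
plan = place_blocks(100, {"fast": 8, "mid": 4, "slow": 2},
                    {"fast": 1, "mid": 1, "slow": 1})
print(plan)  # {'fast': 64, 'mid': 27, 'slow': 9}
```

Homogeneous Hadoop would place roughly 33 blocks per node here, forcing the slow node to become a straggler or to ship its blocks over the network; the skewed plan avoids both, which is where the reported reduction in transfer overhead comes from.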
Mehran Yazdi; Narjes Pourjafarian; Mehrnaz Fani; Elahe Taherianfard
Volume 1, Issue 1 , February 2015, , Pages 19-28
Abstract
Template matching is a widely used technique in many image processing and machine vision applications. In this paper we propose a new, fast, and reliable template matching algorithm that is invariant to rotation, scale, translation, and brightness (RSTB) changes. For this purpose, we adopt the idea of the ring projection transform (RPT) of the image. The proposed algorithm offers two novel contributions that significantly increase the precision and performance of previous methods. First, our algorithm works with the log-spectrum of the image instead of the image itself, which increases matching accuracy; second, to speed up the search strategy, a new modified version of the imperialist competitive algorithm, MICA, is presented. This matching procedure keeps the search from being trapped in local minima by adding a modification step to ICA. The simulation results show the superiority of the proposed method over previous ones.
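The ring projection transform at the heart of this approach can be sketched directly: average the pixel values over concentric rings around the image center, producing a 1-D signature that is invariant to in-plane rotation. Plain nested lists stand in for a real image array, and the two-ring resolution is an illustrative choice.

```python
import math

def ring_projection(image, num_rings):
    """Sketch of the ring projection transform (RPT): average pixels
    over concentric rings around the center. Rotating the image permutes
    pixels within each ring, so the 1-D signature is unchanged."""
    h, w = len(image), len(image[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cy, cx)
    sums, counts = [0.0] * num_rings, [0] * num_rings
    for y in range(h):
        for x in range(w):
            r2 = (y - cy) ** 2 + (x - cx) ** 2
            if r2 <= max_r ** 2:
                ring = min(int(math.sqrt(r2) / max_r * num_rings), num_rings - 1)
                sums[ring] += image[y][x]
                counts[ring] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

# A 5x5 image and its 90-degree rotation yield the same RPT signature.
img = [[1, 2, 3, 4, 5],
       [6, 7, 8, 9, 10],
       [11, 12, 13, 14, 15],
       [16, 17, 18, 19, 20],
       [21, 22, 23, 24, 25]]
rot = [list(row) for row in zip(*img[::-1])]  # rotate 90 degrees
print(ring_projection(img, 2) == ring_projection(rot, 2))  # True
```

Matching then compares these 1-D signatures instead of 2-D patches; applying the RPT to the log-spectrum rather than the raw image, as the paper proposes, adds robustness to translation and brightness changes on top of this rotation invariance.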