Recent Patents on Computer Science (v.9, #3)

Meet Our Editorial Board Member by Robert S.H. Istepanian (187-188).

Diophantine Edge Graceful Graph by Swaminathan A. Mariadoss, Sunita D'Silva (190-194).
Background: Graph labeling problems have interesting applications in coding theory, communication networks, optimal circuit layouts and graph decomposition problems, as described in various patents.

Consider a graph G = (V, E), with |V| = p and |E| = q. If f: V → {0, 1, 2, ..., p} is an injective mapping and the induced map f+: E → {1, 2, ..., q}, defined by f+(uv) = |f(u) - f(v)| for u, v ∈ V, is bijective, then f+ gives an edge graceful labeling.
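
The definition above can be checked mechanically. A minimal sketch (the path graph and vertex labels below are illustrative, not from the paper):

```python
# Sketch: verify that a vertex labeling f induces an edge graceful
# labeling, i.e. the induced edge labels |f(u) - f(v)| form a
# bijection onto {1, ..., q}.

def is_edge_graceful(edges, f):
    """edges: list of (u, v) pairs; f: dict mapping vertex -> label."""
    q = len(edges)
    induced = sorted(abs(f[u] - f[v]) for u, v in edges)
    return induced == list(range(1, q + 1))

# Path P4 with a classic graceful labeling 0-3-1-2
edges = [("a", "b"), ("b", "c"), ("c", "d")]
f = {"a": 0, "b": 3, "c": 1, "d": 2}
print(is_edge_graceful(edges, f))  # True: induced labels are {3, 2, 1}
```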

Methods: In this paper, we assign edge labels directly, taking the labels from the solutions of relevant Diophantine equations. This edge labeling has two steps: first we label the vertices by f, then we induce the labeling on the edges. We have considered the patents “System and method for making decisions using network-guided decision trees with multivariate splits” and “Graph-based ranking algorithms for text processing”.

Results: In Section 1, we present results on complete (m, h) trees (m ≥ 2, h ≥ 1) leading to graceful, odd-edge graceful and almost edge-graceful labelings. In Section 2, we present edge-labeled corona graphs.

Conclusion: A compact representation of the graph, given in the form of integers, may be used in applications of graphs for which Diophantine Edge Graceful labeling is possible. Whether a graph admits a Diophantine Edge Graceful labeling may be determined from the structural properties of the graph.

Background: Breast cancer is among the most common and dangerous cancers in women all over the world. Recent patents have shown that breast cancer is the second leading cause of death worldwide among cancers. This paper presents the development of a breast cancer diagnosis system for digital mammograms using a multiresolution technique.

Methods: The proposed method uses the Ripplet Transform (RT), which has properties well suited to feature extraction. The input mammogram image is first decomposed with the ripplet transform at different scales. Statistical features are then extracted from each scale and used as a feature vector. A Support Vector Machine (SVM) classifier is used to distinguish between normal and abnormal images and to classify abnormalities as benign or malignant.
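
The per-scale statistical feature step can be sketched as follows; the ripplet transform itself is not shown, and the coefficient arrays below are hypothetical stand-ins for one decomposed ROI:

```python
# Sketch: statistical features per decomposition scale, concatenated
# into a feature vector for the SVM. The "scales" here are illustrative
# coefficient lists, not an actual ripplet decomposition.
import statistics

def scale_features(coeffs):
    """Mean, population standard deviation and energy of one scale."""
    mean = statistics.fmean(coeffs)
    std = statistics.pstdev(coeffs)
    energy = sum(c * c for c in coeffs)
    return [mean, std, energy]

# Hypothetical coefficients at three scales of one mammogram ROI
scales = [[0.9, 1.1, 1.0, 0.8], [0.2, -0.1, 0.3], [0.05, -0.02]]
feature_vector = [f for s in scales for f in scale_features(s)]
print(len(feature_vector))  # 9 features: 3 statistics x 3 scales
```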

Results: In this paper, the application of the multiresolution-based ripplet transform to feature extraction for mammogram classification has been demonstrated. A comparative analysis is performed with features extracted from the Gray Level Co-occurrence Matrix (GLCM), Discrete Wavelet Transform (DWT) and Curvelet Transform (CT). The experimental results demonstrate that ripplet transform based feature extraction is an efficient and promising tool for the successful classification of digital mammograms. The average classification rate achieved for normal versus abnormal is 94.41% with curvelet features, 92.68% with wavelet features and 91.75% with GLCM features. RT exploits the multiscale property along with a high degree of directionality, so it achieves a relatively higher average classification rate of 95.08%. The average classification rate obtained for classifying abnormalities as benign or malignant using ripplet transform coefficients is 95.56%, compared with 94.17% for curvelet, 93.61% for wavelet and 90.28% for GLCM.

Conclusion: In this paper, the advantages of the ripplet transform in mammogram analysis are exploited, and a new model is proposed that uses the ripplet transform for feature extraction and classification of mammogram images. The statistical features extracted from the ripplet transform coefficients are fed to the SVM to classify the ROI as normal or abnormal and to differentiate abnormalities as benign or malignant. Compared to other multiresolution transforms, the ripplet transform offers an improvement by representing images with singularities along smooth curves. The experimental results show that the proposed method using ripplet transform coefficients achieves a higher classification rate than the other multiresolution feature extraction methods.

Background: An invention with societal acceptance can have an incredible effect on the comfort and efficiency of everyday life. Shopping at large shopping centers is almost routine in urban communities, and these centers see a huge rush on holidays and weekends. Individuals buy diverse items and place them in the cart. After completing the shopping, customers are supposed to go to the billing section for payment. At the billing counter, the cashier reads the bill using a barcode reader, which wastes considerable time and results in long waiting queues at the billing section.

Methods: For the automation of shopping centers, we have developed a microcontroller-based smart automatic shopping cart, as also described in various patents. The proposed framework has three modules. The first is the purchase item module, placed in the shopping cart. The second is the billing module, located at the billing section, which identifies the cart number and bill details automatically. The third consists of an RF-based transmitter, used to recognize offer zones; it informs clients about current offers through a pop-up message on the cart's LCD screen.
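
The bookkeeping behind the purchase item and billing modules can be sketched as below; the catalog, tags and prices are hypothetical, and the real system would run on the cart's microcontroller:

```python
# Sketch: each RFID read adds an item to the cart's running bill, so
# the billing counter only needs the cart number and the total.

class SmartCart:
    def __init__(self, cart_id, catalog):
        self.cart_id = cart_id
        self.catalog = catalog        # RFID tag -> (name, price)
        self.items = []

    def on_rfid_read(self, tag):
        """Called when an item's RFID tag is placed in the cart."""
        name, price = self.catalog[tag]
        self.items.append((name, price))

    def bill(self):
        return {"cart": self.cart_id,
                "total": sum(price for _, price in self.items)}

catalog = {"A1": ("milk", 2.50), "B7": ("bread", 1.75)}
cart = SmartCart(42, catalog)
cart.on_rfid_read("A1")
cart.on_rfid_read("B7")
print(cart.bill())  # {'cart': 42, 'total': 4.25}
```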

Results: By utilizing this cart, the client can purchase a substantial number of items in less time with less effort. This framework will help stores increase their sales while keeping clients pleased. We have considered the patents “Intelligent shopping cart of the supermarket”, “Intelligent shopping cart system and method used for the supermarket” and “Intelligent shopping cart, planning method and device of the travelling path of the intelligent shopping cart”.

Conclusion: The smart shopping cart is used in shopping complexes for acquiring items. In this work, an RFID card is used for secure access to each item. There are three modules. In the purchase item module, the innovative payment strategy avoids long queues. The offer module informs the client about current offers by showing a pop-up message on the purchase item module's LCD screen; by knowing the billing details ahead of time, the client can make informed purchases. The billing module eliminates long waiting lines by automating the billing procedure. The proposed shopping cart saves time, energy and labor for the client, the owner and the supplier.

Recognizing Faces Across Age Progressions and Under Occlusion by Steven L. Fernandes, Josemin G. Bala (209-215).
Background: Recognizing human faces is a difficult image processing task, mainly when it comes to age variation and occluded images. Aging causes a lot of variation in the human face, and occlusion makes it difficult to recognize a person's image. Human faces undergo changes due to aging; these changes are affected by different factors and differ across age groups. In the early years, such as childhood, the facial shape is of importance, while later, during adulthood, texture variations like wrinkles and pigmentation appear. Age variation poses a major problem for face recognition systems. Further, the identification task is complicated by occlusions. Recognizing faces under occlusion mainly consists of registration and classification, and relatively little work has been done in both of these areas.

Methods: In this paper, two novel techniques have been developed to recognize human faces that vary across age and under occlusion. Recognizing faces across age variations was proposed using the Sparse Representation technique. Recognizing faces under occlusion was proposed using Principal Component Analysis (PCA) extraction and 1-Nearest Neighbor (1-NN) classification.
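
The 1-NN classification step can be sketched as follows; the two-dimensional "projections" and labels below are hypothetical, standing in for PCA-projected face features:

```python
# Sketch: after PCA projection each face is a feature vector; a probe
# image is assigned the label of its nearest gallery vector under
# Euclidean distance.
import math

def one_nn(gallery, probe):
    """gallery: list of (vector, label); returns label of the nearest vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(gallery, key=lambda vl: dist(vl[0], probe))[1]

# Hypothetical PCA projections of two enrolled faces
gallery = [([0.1, 0.9], "alice"), ([0.8, 0.2], "bob")]
print(one_nn(gallery, [0.15, 0.85]))  # alice
```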

Results: From the analysis against various existing state-of-the-art techniques, it was found that the proposed method to recognize faces across age variation using the Sparse Representation technique gives the best recognition rate of 81.81% on the FGNET database. Also, recognizing faces under occlusion using Principal Component Analysis extraction and 1-Nearest Neighbor classification gives the best recognition rate of 95.890% on the IIITD Disguise face database. We have considered the patents “Three dimensional human face recognition method based on the intermediate frequency information in geometry image” and “The face identification method based on multiscale weber local descriptor and the kernel group sparse representation”.

Conclusion: In this paper, we have developed two novel approaches to recognize faces across age variations and under occlusion. The two novel techniques developed were compared across various existing state of the art techniques and validated across various standard public face databases. From our analysis we have found that the two novel techniques give the best recognition rates across age progressions and varying occlusion.

Background: Since multiplication dominates the execution time of most DSP algorithms, there is a need for high-speed, area-efficient and power-efficient multipliers. Many patents also emphasize that multiplication time is still the dominant factor in determining the instruction cycle time of a DSP chip; hence, there is a need for high-speed, low-power multipliers. This can be achieved using Vedic Mathematics, since multipliers based on Vedic Mathematics are among the fastest and lowest-power multipliers. The Urdhva Tiryakbhyam and Nikhilam sutras form the basic Vedic formulas used in the design of Vedic multipliers.

Method: Existing multiplication systems and the latest trends are studied, and a proposed system is developed: a high-speed, low-power, area-efficient multiplier using a modified Vedic mathematical technique. Vedic Mathematics is an ancient branch of mathematics with a unique technique of calculation based on 16 sutras. These sutras reduce both the time and the space required, and the complexity of multiplication is reduced because unwanted steps are eliminated. In this paper, the Urdhva Tiryakbhyam sutra is applied to a two-bit multiplier, and the formula is slightly modified and applied to the multiplication of higher-order bits. A 16x16 multiplier is designed and simulated using the Xilinx simulator. The timing report is compared with other multiplication techniques, such as the Karatsuba array multiplier and the modified Radix-2 technique.
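
The "vertically and crosswise" idea of the Urdhva Tiryakbhyam sutra can be sketched in software (the hardware design in the paper maps these column sums to parallel partial-product logic; this decimal-digit version is only illustrative):

```python
# Sketch of Urdhva Tiryakbhyam for decimal digits: column k of the
# product sums a_i * b_j over all i + j = k (the vertical and crosswise
# products), then carries propagate. Digit lists are least-significant first.

def urdhva_multiply(a_digits, b_digits):
    n, m = len(a_digits), len(b_digits)
    cols = [0] * (n + m - 1)
    for i in range(n):                 # vertical and crosswise products
        for j in range(m):
            cols[i + j] += a_digits[i] * b_digits[j]
    result, carry = [], 0              # carry propagation
    for c in cols:
        carry, digit = divmod(c + carry, 10)
        result.append(digit)
    while carry:
        carry, digit = divmod(carry, 10)
        result.append(digit)
    return result

# 23 * 14 = 322, digits least-significant first
print(urdhva_multiply([3, 2], [4, 1]))  # [2, 2, 3]
```

Because every column is computed independently before the carry pass, the column products can be generated in parallel, which is what makes the sutra attractive for multiplier hardware.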

Results: The proposed multiplier IC is designed and implemented in Cadence Encounter, demonstrating the efficient use of the Vedic sutra. The comparison of timing reports reveals that the proposed multiplier takes less time and is faster than existing multipliers, and the power details show that it consumes less power. The proposed design is implemented in VLSI using Cadence tools, and the IC design of the proposed multiplier is presented.

Conclusion: A high-speed, low-power, area-efficient multiplier based on a modified Vedic mathematical technique is designed. The multiplier IC is designed in VLSI using Cadence tools, with floor planning and power planning completed and the primitive cells placed and routed. The proposed multiplier is faster and consumes less area and power than existing multipliers. Hence, the implementation of the proposed multiplier using modified Vedic mathematics demonstrates the efficient use of Vedic sutras and proves efficient in comparison with existing multipliers.

QoS Based Scheduling Algorithms in Energy Aware Cloud Environment by Prakash Kumar, Krishna Gopal, Jai P. Gupta (222-230).
Background: Energy consumption is a major issue in Cloud Computing environments. Its efficient use brings benefits such as cost saving, efficient utilization of resources and protection of the environment, as energy consumption, cost and time are important decision-making factors for both users and cloud service providers. QoS-conscious scheduling of jobs along with energy awareness is very important, especially in cloud environments, where large datacenters must be maintained while huge computations are involved. Optimal resource usage and price reduction are the direct operational benefits for both users and service providers. A substantial amount of energy is consumed by the underlying system resources. Hence, energy-aware computation and scheduling is a major future concern that may contribute heavily to maintaining nature's environmental systems and ecological balances and may avoid direct and indirect health hazards to all living beings. Omnidirectional benefits result from using energy-aware scheduling techniques in Cloud environments, without compromising the Quality of Service.

Methods: Software-based scheduling and testing are done with DVFS (Dynamic Voltage and Frequency Scaling) based experiments for minimizing the processing cost and makespan time in an energy-aware environment, so that in addition to saving energy, the Quality of Service is not compromised. Simulations are done using CloudSim with combinations of various Quality of Service (QoS) parameters along with combinations of energy-aware VM allocation policies. A comparison of these algorithms with commonly used existing algorithms is shown based on the Processing Cost, Makespan Time and Energy Utilization parameters.

Results: A combination of the Max-Min scheduling algorithm for cloudlet (task) scheduling with the Minimum Used Host scheduling algorithm for virtual machine allocation gives the most efficient environment in terms of Processing Cost, Makespan Time and Energy Consumption while maintaining QoS.
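
The Max-Min heuristic referenced above can be sketched as follows; the task lengths and VM speeds are hypothetical, and this simplified version ignores the cost and energy terms the paper also evaluates:

```python
# Sketch of Max-Min scheduling: among unscheduled tasks, pick the one
# whose best (minimum) completion time is largest, and assign it to the
# VM that minimizes its completion time.

def max_min_schedule(task_lengths, vm_speeds):
    ready = [0.0] * len(vm_speeds)          # per-VM ready time
    assignment = {}
    tasks = dict(enumerate(task_lengths))
    while tasks:
        best = None                          # (completion, task, vm)
        for t, length in tasks.items():
            # best completion time of task t if it were scheduled next
            comp, vm = min((ready[v] + length / s, v)
                           for v, s in enumerate(vm_speeds))
            if best is None or comp > best[0]:
                best = (comp, t, vm)
        comp, t, vm = best
        ready[vm] = comp
        assignment[t] = vm
        del tasks[t]
    return assignment, max(ready)            # schedule and makespan

assignment, makespan = max_min_schedule([40, 10, 30, 20], [1.0, 2.0])
print(makespan)  # 35.0
```

Scheduling the longest tasks first keeps them from being stranded at the end of the schedule, which is why Max-Min tends to give a lower makespan than first-come-first-served on heterogeneous VMs.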

Conclusion: It is observed that adopting a modified, conscious and logical scheduling policy in Cloud environments may drastically improve the QoS and save energy as well, which is extremely important for the huge Data Centers used in Cloud environments, as described in various patents.

Background: It is very difficult for physically handicapped people to lead their lives, as they are completely dependent on others for their needs. It is therefore essential to provide aids that allow physically handicapped people to lead their lives normally. The aim of our paper is to provide an assistant for the physically disabled.

Methods: Existing systems and patents to aid the physically disabled are studied along with the latest trends, and a proposed system is developed.

Results: A prototype of the proposed system is developed using an Arduino; different sensors are interfaced with the controller, and the system is tested with a patient. The system responded properly in fulfilling the patient's basic needs.

Conclusion: A novel prototype is designed and implemented using an Arduino microcontroller. This prototype helps to monitor and assist physically disabled people, enabling them to lead their lives in a very comfortable way. The basic requirements of patients are fulfilled through basic head movements, and the physical condition of the patient is monitored. The proposed prototype, implemented and tested on the Arduino, is very helpful to the physically handicapped in fulfilling their needs.
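
The head-movement-to-request mapping might look like the sketch below; the tilt thresholds, axes and the request set are hypothetical, since the abstract does not specify the sensor or firmware details:

```python
# Sketch: classifying head-tilt sensor readings into patient requests,
# as the prototype's firmware might. All thresholds are illustrative.

REQUESTS = {"left": "water", "right": "food", "up": "call attendant"}

def classify_tilt(x_tilt, y_tilt, threshold=15):
    """Classify tilt angles (degrees) into a head movement, or None."""
    if x_tilt < -threshold:
        return "left"
    if x_tilt > threshold:
        return "right"
    if y_tilt > threshold:
        return "up"
    return None

movement = classify_tilt(-22, 3)
print(REQUESTS.get(movement))  # water
```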

Symbolic Representation Based Approach for Object Identification in Infrared Images by Shimoga N. B. Bhushan, Harisha, Arti Pawar, Vidyalakshmi (235-240).
Background: In most applications, we expect security or surveillance camera systems to work around the clock, as also described in various patents. Normal cameras, which produce images in the visible spectrum, are not effective in the absence of natural light. Although some solutions exist, such as relying on natural illumination during the day and arranging artificial illumination for the camera at night, these are not practically advisable and have limitations such as shadows during the morning or day. As a result, a system may fail to capture and identify objects in dark areas of the environment. One solution to such problems is the use of infrared (IR) cameras instead of visible-spectrum cameras.

Methods: This article proposes a novel method of representing infrared images using edgelet features for object recognition applications. The proposed technique makes use of an interval-valued representation for the edgelet features of the infrared images. A scheme for identifying objects based on the proposed feature extraction and representation model is also designed.

Results: Experiments were conducted to show the effectiveness of the proposed method on publicly available IR corpuses, viz. the OSU Thermal Pedestrian Database, the Multispectral Image Database and the Indoors and Outdoors datasets. Two sets of experiments were conducted, each containing three different trials. In the first set, we used 40% of the database for training and the remaining 60% for testing; in the second set, we used 60% for training and 40% for testing. In each trial, we randomly selected the training and testing samples. For evaluation, we calculated precision, recall and f-measure for each trial. The details of the experiments are shown in Table 1.

Conclusion: This article presents a novel method of representing infrared images using edgelet features for object recognition applications. The proposed technique makes use of an interval-valued representation of edgelet features for identifying objects in infrared images, and a method of identification based on the proposed edgelet features and interval-valued representation model is also proposed. Since the features are transformed into an interval-valued representation, the proposed model drastically reduces the dimension of the feature space, which in turn reduces the computational time for object recognition in infrared images. The proposed algorithm is critically analyzed on three publicly available corpuses, and extensive experimentation is conducted on publicly available datasets.
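
One way to read "interval-valued representation" is the sketch below, where each class keeps only a [min, max] interval per feature from training, so the stored representation stays small regardless of training-set size. The feature values, class names and scoring rule are hypothetical, not the paper's exact scheme:

```python
# Sketch: per-class [min, max] intervals over edgelet features; a test
# vector is scored by how many of its features fall inside a class's
# intervals, and the best-scoring class wins.

def build_intervals(samples):
    """samples: list of feature vectors -> list of (lo, hi) per feature."""
    return [(min(col), max(col)) for col in zip(*samples)]

def score(intervals, vector):
    return sum(lo <= v <= hi for (lo, hi), v in zip(intervals, vector))

def classify(class_intervals, vector):
    return max(class_intervals, key=lambda c: score(class_intervals[c], vector))

# Hypothetical two-feature edgelet vectors for two object classes
class_intervals = {
    "pedestrian": build_intervals([[0.2, 0.8], [0.3, 0.7], [0.25, 0.75]]),
    "vehicle":    build_intervals([[0.9, 0.1], [0.8, 0.2], [0.85, 0.15]]),
}
print(classify(class_intervals, [0.28, 0.72]))  # pedestrian
```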

Background: Wireless sensor networks (WSNs) are now widely utilized for event detection, especially when combined with data fusion technology, which opens new opportunities in this realm. The aims of this paper are to introduce readers to existing multi-sensor fusion models and common algorithms, to illustrate approaches for indexing events in multimedia sensor data, and to provide an extensive analysis of existing patents concerning how georeferenced media data relate to the fusion process and how the associated resources are used to mine semantic knowledge.

Methods: Research related to modeling and detecting events in georeferenced multimedia fusion is reviewed. The effect of attributes on the fusion and semantic-knowledge mining process and event-based indexing methods are examined, and some recent patents in this application area are provided.

Results: Due to the prodigious development of sensors such as cameras and smartphones, the georeferenced concept has infiltrated many kinds of media data. However, existing methods still do not meet people's expectations and confront many challenges; future research has to deal with both subjective and objective factors.

Conclusion: The paper gives an overview of, and comments on, existing multi-sensor data fusion approaches and application areas. Further analysis is conducted on how georeferenced media affect the fusion process and contribute to mining the relevance between sensors. Then, based on event detection, the paper preliminarily discusses event-based indexing methods for multimedia data fusion, and some possible additional methods and existing patents are given.

Background: According to nodes' participating style, routing protocols can be classified into three categories, namely, direct communication, flat and clustering protocols. In direct communication protocols, a sensor node sends data directly to the sink. Under this protocol, if the diameter of the network is large, the power of sensor nodes will be drained very quickly. Furthermore, as the number of sensor nodes increases, collision becomes a significant factor which defeats the purpose of data transmission. Under flat protocols, all nodes in the network are treated equally. When a node needs to send data, it may find a route consisting of several hops to the sink. Normally, the probability of participating in the data transmission process is higher for the nodes around the sink than for those far away from it, so the nodes around the sink could run out of power soon. In the clustered routing architecture, nodes are grouped into clusters, and a dedicated cluster head node collects, processes, and forwards the data from all the sensor nodes within its cluster. One of the most critical issues in wireless sensor networks is the limited availability of energy on network nodes; thus, making good use of energy is necessary to increase network lifetime.
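
The clustered approach can be illustrated with the LEACH-style cluster-head election used later in this paper: in each round, a node that has not recently served becomes a head with threshold probability T(n) = P / (1 - P * (r mod 1/P)). A minimal sketch (node count and head fraction are illustrative):

```python
# Sketch of LEACH-style cluster-head election for round r, with desired
# head fraction p. Nodes that have already served in the current epoch
# would be excluded in the full protocol; this sketch omits that state.
import random

def leach_threshold(p, r):
    """Election threshold T(n) for head fraction p in round r."""
    return p / (1 - p * (r % round(1 / p)))

def elect_heads(node_ids, p, r, rng):
    t = leach_threshold(p, r)
    return [n for n in node_ids if rng.random() < t]

rng = random.Random(0)
heads = elect_heads(range(100), p=0.05, r=0, rng=rng)
print(len(heads))  # roughly 5 of 100 nodes become cluster heads
```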

Methods: A review of the literature and patents on the use of artificial neural networks with wireless sensor networks has been conducted. When working with wireless sensor networks, the key concern is energy conservation. It is important to extend the network's lifetime and to reduce energy consumption, as the nodes are battery powered and provided with a fixed amount of initial energy. To this end, we localize the base station of the network: it should be placed in such a way that the energy consumption of the nodes is reduced. We use an artificial neural network approach to find the optimized position of the base station.
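
The optimization target behind base-station placement can be sketched as follows. This uses plain gradient descent on total squared distance to the nodes as a stand-in for the paper's ANN, and the node coordinates are hypothetical:

```python
# Sketch: find the base-station position minimizing the mean squared
# distance to the sensor nodes by gradient descent (a stand-in for the
# ANN-based optimizer of the paper).

def locate_base_station(nodes, steps=500, lr=0.05):
    # start deliberately away from the optimum
    x = sum(n[0] for n in nodes) / len(nodes) + 10.0
    y = sum(n[1] for n in nodes) / len(nodes) - 10.0
    for _ in range(steps):
        gx = sum(2 * (x - nx) for nx, _ in nodes) / len(nodes)
        gy = sum(2 * (y - ny) for _, ny in nodes) / len(nodes)
        x, y = x - lr * gx, y - lr * gy
    return x, y

nodes = [(0, 0), (10, 0), (0, 10), (10, 10)]
bx, by = locate_base_station(nodes)
print(round(bx, 2), round(by, 2))  # 5.0 5.0 (the centroid)
```

For the squared-distance objective the optimum is simply the centroid; a learned model becomes useful once the objective also accounts for cluster structure, obstacles or per-node traffic, as in the paper's setting.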

Results: When the proposed base-station localization algorithm using an artificial neural network is embedded in the LEACH protocol, it gives better results than the LEACH protocol with the base station positioned at the corners of the area where the sensor nodes are deployed. The results are better in terms of the number of dead nodes versus the number of rounds, energy consumption (in Joules) versus the number of rounds, and the number of packets transmitted to the base station and the cluster heads. It can be concluded that LEACH with ANN provides an energy-efficient scheme.

Conclusion: In this work, we have applied an ANN to base-station localization and embedded it in the LEACH protocol. In the future, we can further enhance network lifetime by applying ANNs in the cluster-head selection mechanisms of the LEACH and LEACH-C protocols.

An Energy Saving Algorithm (ESA) For Wireless Sensor Networks: Testing and Evaluation by Muneer B. Yassein, Safwan Omari, Enas H.A. Yabes, Shadi Aljawarneh (260-273).
Background: Energy consumption is one of the most critical issues considered in designing and improving routing protocols in wireless sensor networks. Since sensor nodes are equipped with a limited amount of energy and recharging these nodes is almost impossible, reducing energy consumption and raising the lifetime of wireless sensor networks have gained increasing attention from researchers.

Methods: In this paper, we present a network Energy Saving Algorithm (ESA) for extending the lifetime of wireless sensor networks by coordinating active and sleeping nodes according to their residual energy and the topological state of each node. ESA is implemented and combined with the LEACH routing protocol. The underlying motivation is to further decrease power dissipation, balance power dissipation between nodes and maximize the network lifetime. We also reviewed a number of papers and recent patents in computer science from 2008 to 2014.
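
An active/sleep decision of this kind can be sketched as below. The thresholds and the neighbor-coverage rule are hypothetical simplifications, not the exact ESA criteria:

```python
# Sketch: a node sleeps when its residual energy is below the network
# average and it has enough energy-rich neighbors to cover its area.

def schedule(nodes, min_active_neighbors=2):
    """nodes: dict id -> (residual_energy, [neighbor ids]).
    Returns dict id -> 'sleep' or 'active'."""
    avg = sum(e for e, _ in nodes.values()) / len(nodes)
    state = {}
    for nid, (energy, neighbors) in nodes.items():
        rich = [n for n in neighbors if nodes[n][0] >= avg]
        if energy < avg and len(rich) >= min_active_neighbors:
            state[nid] = "sleep"       # low energy, area still covered
        else:
            state[nid] = "active"
    return state

# Hypothetical 4-node topology with residual energies
nodes = {
    1: (0.9, [2, 3]), 2: (0.8, [1, 3, 4]),
    3: (0.2, [1, 2, 4]), 4: (0.7, [2, 3]),
}
print(schedule(nodes))  # node 3 sleeps; the rest stay active
```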

Results: In order to estimate the efficiency of our suggested algorithm, we compared our outcomes to the well-known standard algorithm (LEACH), implemented on the OMNeT++ simulator. Several performance metrics were used in our evaluation, including Average Residual Energy, Rounds until First Node Dies, Rounds until Half the Nodes Die, and Percentage of High Energy Nodes. The simulation results show improved performance of ESA in terms of total power consumption and the number of live nodes over LEACH, K-Means and direct methods. On average, ESA increases Average Residual Energy by 22.4% and decreases the Standard Deviation of Average Energy by 40%. We also found that ESA increases the number of Rounds until the First Node Dies and until Half the Nodes Die by 3.1 times and 88%, respectively. Finally, we observe that ESA increases the percentage of High Energy Nodes by 3.5 times.

Conclusion: It is necessary in wireless sensor networks to decrease energy consumption by performing certain computations and operations for this purpose; the cost of computation is less than the cost of transmitting data, even if the computation is done on the sensor nodes themselves. Many energy saving algorithms for wireless sensor networks have been proposed in recent years. In this paper, a network energy saving algorithm (ESA) is proposed, which depends on a node's residual energy and its topological state to determine when it should sleep and when it should stay active.

Patent Selections: (274-276).