Search by property

From ISLAB/CAISR

This page provides a simple browsing interface for finding entities described by a property and a named value. Other available search interfaces include the page property search and the ask query builder.


A list of all pages that have the property "Abstract" with the value "&lt;p&gt;N/A&lt;/p&gt;". Since there are only a few results, nearby values are also displayed.

Showing below up to 26 results starting with #1.


List of results

  • Publications:Eigen-Patch Iris Super-Resolution For Iris Recognition Improvement  + (<p>Low image resolution will be a predominant factor in iris recognition systems as they evolve towards more relaxed acquisition conditions. Here, we propose a super-resolution technique to enhance iris images based on Principal Component Analysis (PCA) Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of enhanced images by preserving local information and reducing artifacts. We validate the system using a database of 1,872 near-infrared iris images. Results show the superiority of the presented approach over bilinear or bicubic interpolation, with the eigen-patch method being more resilient to image resolution reduction. We also perform recognition experiments with an iris matcher based on 1D Log-Gabor, demonstrating that verification rates degrade more rapidly with bilinear or bicubic interpolation.</p>)
  • Publications:Situation Awareness in Colour Printing and Beyond  + (<p>Machine learning methods are increasingly being used to solve real-world problems in society. Often, the complexity of the methods is well hidden from users. However, integrating machine learning methods in real-world applications is not a straightforward process and requires both knowledge about the methods and domain knowledge of the problem. Two such domains are colour print quality assessment and anomaly detection in smart homes, which are currently driven by manual monitoring of complex situations. The goal of the presented work is to develop methods, algorithms and tools to facilitate monitoring and understanding of the complex situations which arise in colour print quality assessment and anomaly detection for smart homes. The proposed approach builds on the use and adaptation of supervised and unsupervised machine learning methods.</p><p>Novel algorithms for computing objective measures of print quality in production are proposed in this work. Objective measures are also modelled to study how paper and press parameters influence print quality. Moreover, a study on how print quality is perceived by humans is presented, and experiments aiming to understand how subjective assessments of print quality relate to objective measurements are explained. The obtained results show that the objective measures reflect important aspects of print quality; these measures are also modelled with reasonable accuracy using paper and press parameters. The models of objective measures are shown to reveal relationships consistent with known print quality phenomena.</p><p>In the second part of this thesis the application area of anomaly detection in smart homes is explored. A method for modelling human behaviour patterns is proposed. The model is used to detect deviating behaviour patterns using contextual information from both time and space. The proposed behaviour pattern model is tested using simulated data and is shown to be suitable given four types of scenarios.</p><p>The thesis shows that parts of offset lithographic printing, which traditionally is a human-centered process, can be automated by the introduction of image processing and machine learning methods. Moreover, it is concluded that in order to facilitate robust and accurate anomaly detection in smart homes, a holistic approach which makes use of several contextual aspects is required.</p>)
  • Publications:A Segmentation-Free Approach to Recognise Printed Sinhala Script  + (<p>The majority of character recognition algorithms, such as those using ANNs, need segmentation of the script prior to recognition. In contrast to Western scripts, Brahmi-descended South Asian scripts such as Sinhala consist of modifier symbols, which make segmentation a difficult task that needs to be addressed as a separate issue. Further, the change of shape of the basic character (by violating modification rules) in the modification process makes some modified Sinhala characters impossible to segment. The proposed method, which uses Linear Symmetry to examine a correlation between characters in the script and the testing alphabet, recognises characters directly within the image of the script. A similar method is used to resolve confusing characters. Experiments show highly favourable results not only for the basic characters of the alphabet but also for the modifier symbols. A novel but simple method using Linear Symmetry for skew correction has also been proposed.</p>)
  • Publications:Evaluation of Self-Organized Approach for Predicting Compressor Faults in a City Bus Fleet  + (<p>Managing the maintenance of a commercial vehicle fleet is an attractive application domain of ubiquitous knowledge discovery. Cost-effective methods for predictive maintenance are progressively demanded in the automotive industry. The traditional diagnostic paradigm that requires human experts to define models is not scalable to today's vehicles with hundreds of computing units and thousands of control and sensor signals streaming through the on-board controller area network. A more autonomous approach must be developed. In this paper we evaluate the performance of the COSMO approach for automatic detection of air pressure related faults on a fleet of city buses. The method is both generic and robust. Histograms of a single pressure signal are collected and compared across the fleet, and deviations are matched against workshop maintenance and repair records. It is shown that the method can detect several of the cases when compressors fail on the road, well before the failure. The work is based on data from a three-year-long field study involving 19 buses operating in and around a city on the west coast of Sweden. © The Authors. Published by Elsevier B.V.</p>)
  • Publications:Identification of Gait Events using Expert Knowledge and Continuous Wavelet Transform Analysis  + (<p>Many gait analysis applications involve long-term or continuous monitoring, which requires gait measurements to be taken outdoors. Wearable inertial sensors like accelerometers have become popular for such applications as they are miniature, low-powered and inexpensive, but with the drawback that they are prone to noise and require robust algorithms for precise identification of gait events. However, most gait event detection algorithms have been developed by simulating physical world environments inside controlled laboratories. In this paper, we propose a novel algorithm that robustly and efficiently identifies gait events from accelerometer signals collected during both indoor and outdoor walking of healthy subjects. The proposed method makes adept use of prior knowledge of walking gait characteristics, referred to as expert knowledge, in conjunction with continuous wavelet transform analysis to detect the gait events of heel strike and toe off. It was observed that, in comparison to indoor, the outdoor walking acceleration signals were of poorer quality and highly corrupted with noise. The proposed algorithm presents an automated way to effectively analyze such noisy signals in order to identify gait events.</p>)
  • Publications:Ethiopic Character Recognition Using Direction Field Tensor  + (<p>Many languages in Ethiopia use a unique alphabet called Ethiopic for writing. However, no OCR system for the script has been developed to date. In an effort to develop automatic recognition of Ethiopic script, a novel system is designed by applying structural and syntactic techniques. The recognition system is developed by extracting primitive structural features and their spatial relationships. A special tree structure is used to represent the spatial relationship of primitive structures. For each character, a unique string pattern is generated from the tree, and recognition is achieved by matching the string against a stored knowledge base of the alphabet. To implement the recognition system, we use the direction field tensor as a tool for character segmentation and for extraction of structural features and their spatial relationships. Experimental results are reported.</p>)
  • Publications:Directionality Features and the Structure Tensor  + (<p>Many low-level features, as well as varying methods of extraction and interpretation, rely on directionality analysis (for example the Hough transform, Gabor filters, SIFT descriptors and the structure tensor). The theory of the gradient based structure tensor (a.k.a. the second moment matrix) is a very well suited theoretical platform in which to analyze and explain the similarities and connections (indeed often equivalence) of supposedly different methods and features that deal with image directionality. Of special interest to this study is the SIFT descriptors (histogram of oriented gradients, HOGs). Our analysis of interrelationships of prominent directionality analysis tools offers the possibility of computation of HOGs without binning, in an algorithm of comparative time complexity.</p>)
  • Publications:Histogram of directions by the structure tensor  + (<p>Many low-level features, as well as varying methods of extraction and interpretation, rely on directionality analysis (for example the Hough transform, Gabor filters, SIFT descriptors and the structure tensor). The theory of the gradient based structure tensor (a.k.a. the second moment matrix) is a very well suited theoretical platform in which to analyze and explain the similarities and connections (indeed often equivalence) of supposedly different methods and features that deal with image directionality. Of special interest to this study is the SIFT descriptors (histogram of oriented gradients, HOGs). Our analysis of interrelationships of prominent directionality analysis tools offers the possibility of computation of HOGs without binning, in an algorithm of comparative time complexity.</p>)
  • Publications:Causal discovery using clusters from observational data  + (<p>Many methods have been proposed over the years for distinguishing causes from effects using observational data only, and new ones are continuously being developed – deducing causal relationships is difficult enough that we do not hope to ever get the perfect one. Instead, we progress by creating powerful heuristics, capable of capturing more and more of the hints that are present in real data.</p><p>One type of such hints, quite surprisingly rarely explicitly addressed by existing methods, is in-homogeneities in the data. Clusters are a very typical occurrence that should be taken into account, and exploited, in the process of identifying causes and effects. In this paper, we discuss the potential benefits, and explore the hints that clusters in the data can provide for causal discovery. We propose a new method, and show, using both artificial and real data, that accounting for clusters in the data leads to more accurate learning of causal structures.</p>)
  • Publications:A Trinocular Stereo System for Detection of Thin Horizontal Structures  + (<p>Many vision-based approaches for obstacle detection state that thin vertical structures, e.g. poles and trees, are of importance. However, there are also problems in detecting thin horizontal structures. In an industrial setting there are horizontal objects, e.g. cables and fork lifts, and slanting objects, e.g. ladders, that also have to be detected. This paper focuses on the problem of detecting thin horizontal structures. We introduce a test apparatus for testing thin objects as a complement to the test pieces for human safety described in the European standard EN 1525, Safety of industrial trucks - driverless trucks and their systems. The system uses three cameras, arranged as a horizontal pair and a vertical pair, which makes it possible to also detect thin horizontal structures. A sparse disparity map based on edges and a dense disparity map are used to identify problems with a trinocular system. Both methods use the sum of absolute differences to compute the disparity maps. Tests show that the proposed trinocular system detects all objects on the test apparatus. Whether a sparse or dense method is used is not critical. Further work will implement the algorithm in real time and verify it on a final system in many types of scenery.</p>)
  • Publications:Obstacle Detection For Thin Horizontal Structures  + (<p>Many vision-based approaches for obstacle detection state that thin vertical structures, e.g. poles and trees, are of importance. However, there are also problems in detecting thin horizontal structures. In an industrial setting there are horizontal objects, e.g. cables and fork lifts, and slanting objects, e.g. ladders, that also have to be detected. This paper focuses on the problem of detecting thin horizontal structures. The system uses three cameras, arranged as a horizontal pair and a vertical pair, which makes it possible to also detect thin horizontal structures. A comparison between a sparse disparity map based on edges and a dense disparity map with a column and row filter is made. Both methods use the Sum of Absolute Differences to compute the disparity maps. Special interest has been paid to scenes with thin horizontal objects. Tests show that the sparse method based on the Canny edge detector works better for the environments we have tested.</p>)
  • Publications:The mass appraisal of the real estate by computational intelligence  + (<p>Mass appraisal is the systematic appraisal of groups of properties as of a given date using standardized procedures and statistical testing. Mass appraisal is commonly used to compute real estate tax. There are three traditional real estate valuation methods: the sales comparison approach, the income approach, and the cost approach. Mass appraisal models are commonly based on the sales comparison approach. Ordinary least squares (OLS) linear regression is the classical method used to build models in this approach. In this paper, the method is compared with computational intelligence approaches - support vector machine (SVM) regression, multilayer perceptron (MLP), and a committee of predictors. All three predictors are used to build a weighted data-dependent committee. A self-organizing map (SOM) generating clusters of value zones is used to obtain the data-dependent aggregation weights. The experimental investigations performed using data cordially provided by the Register center of Lithuania have shown very promising results. The performance of the computational intelligence-based techniques was considerably higher than that obtained using the official real estate models of the Register center. The performance of the committee using the weights based on zones obtained from the SOM was also higher than that of the committee exploiting the real estate value zones provided by the Register center. (C) 2009 Elsevier B.V. All rights reserved.</p>)
  • Publications:Predicting the need for vehicle compressor repairs using maintenance records and logged vehicle data  + (<p>Methods and results are presented for applying supervised machine learning techniques to the task of predicting the need for repairs of air compressors in commercial trucks and buses. Prediction models are derived from logged on-board data that are downloaded during workshop visits and have been collected over three years on a large number of vehicles. A number of issues are identified with the data sources, many of which originate from the fact that the data sources were not designed for data mining. Nevertheless, exploiting this available data is very important for the automotive industry as a means to quickly introduce predictive maintenance solutions. It is shown on a large data set from heavy duty trucks in normal operation how this can be done and generate a profit.</p><p>Random forest is used as the classifier algorithm, together with two methods for feature selection whose results are compared to those of a human expert. The machine learning based features outperform the human expert features, which supports the idea of using data mining to improve maintenance operations in this domain. © 2015 Elsevier Ltd.</p>)
  • Publications:Consensus self-organized models for fault detection (COSMO)  + (<p>Methods for equipment monitoring are traditionally constructed from specific sensors and/or knowledge collected prior to implementation on the equipment. A different approach is presented here that builds up knowledge over time by exploratory search among the signals available on the internal field-bus system and by comparing the observed signal relationships among a group of equipment that perform similar tasks. The approach is developed for the purpose of increasing vehicle uptime, and is therefore demonstrated in the case of a city bus and a heavy duty truck. However, it also works well for smaller mechatronic systems like computer hard-drives. The approach builds on an on-board self-organized search for models that capture relations among signal values on the vehicles’ data buses, combined with a limited-bandwidth telematics gateway and an off-line server application where the parameters of the self-organized models are compared. The presented approach represents a new look at error detection in commercial mechatronic systems, where the normal behavior of a system is actually found under real operating conditions, rather than the behavior observed in a number of laboratory tests or test-drives prior to production of the system. The approach has the potential to be the basis for a self-discovering system for general purpose fault detection and diagnostics.</p>)
  • Publications:Partial Fingerprint Registration for Forensics using Minutiae-generated Orientation Fields  + (<p>Minutia-based matching is the most widely accepted method for both automated and manual (forensic) fingerprint matching. The scenario of comparing a partial fingerprint minutia set against a full fingerprint minutia set is a challenging problem. In this work, we propose a method to register the orientation field of the partial fingerprint minutia set to the orientation field of the full fingerprint minutia set. As a consequence of registering the partial fingerprint orientation field, we obtain extra information that can augment a minutia-based matcher by reducing the search space of minutiae in the full fingerprint. We present the accuracy of our registration algorithm on the NIST-SD27 database, reporting separately for both subjective and quantitative quality classifications of NIST-SD27. The registration performance accuracy is measured in terms of the percentage of ground truth minutiae present in the reduced minutiae search space generated by our algorithm. © 2014 IEEE.</p>)
  • Publications:Interpretation and Alignment of 2D Indoor Maps : Towards a Heterogeneous Map Representation  + (<p>Mobile robots are increasingly being used in automation solutions, with notable examples in service robots, such as home-care, and warehouses. Autonomy of mobile robots is particularly challenging, since their work space is not deterministic, known a priori, or fully predictable. Accordingly, the ability to model the work space, that is, robotic mapping, is among the core technologies that are the backbone of autonomous mobile robots. However, for some applications the abilities of mapping and localization do not meet all the requirements, and robots with an enhanced awareness of their surroundings are desired. For instance, a map augmented with semantic labels is instrumental to support Human-Robot Interaction and high-level task planning and reasoning. This thesis addresses this requirement through an interpretation and integration of multiple input maps into a semantically annotated heterogeneous representation. The heterogeneous representation should contain different interpretations of an input map, establish and maintain associations among different input sources, and construct a hierarchy of abstraction through model-based representation. The structuring and construction of this representation are at the core of this thesis, and the main objectives are: a) modeling, interpretation, semantic annotation, and association of the different data sources into a heterogeneous representation, and b) improving the autonomy of the aforementioned processes by curtailing the dependency of the methods on human input, such as domain knowledge. This work proposes map interpretation techniques, such as abstract representation through modeling and semantic annotation, in an attempt to enrich the final representation. In order to associate multiple data sources, this work also proposes a map alignment method. The contributions and general observations that result from the studies included in this work can be summarized as: i) the manner of structuring the heterogeneous representation, ii) underlining the advantages of modeling and abstract representations, iii) several approaches to semantic annotation, and iv) improved extensibility of methods by lessening their dependency on human input. The scope of the work has been focused on 2D maps of well-structured indoor environments, such as warehouses, homes, and office buildings.</p>)
  • Publications:Modeling Electromechanical Aspects of Cyber-Physical Systems  + (<p>Model-based tools have the potential to significantly improve the process of developing novel cyber-physical systems (CPS). In this paper, we consider the question of what language features are needed to model such systems. We use a small, experimental hybrid systems modeling language to show how a number of basic and pervasive aspects of cyber-physical systems can be modeled concisely using a small set of language constructs. We then consider four more complex case studies from the domain of robotics. The first, a quadcopter, illustrates that these constructs can support the modeling of interesting systems. The second, a serial robot, provides a concrete example of why it is important to support static partial derivatives, namely, that it significantly improves the way models of rigid body dynamics can be expressed. The third, a linear solenoid actuator, illustrates the language’s ability to integrate multiphysics subsystems. The fourth and final, a compass gait biped, shows how a hybrid system with non-trivial dynamics is modeled. Through this analysis, the work establishes a strong connection between the engineering needs of the CPS domain and the language features that can address these needs. The study builds the case for why modeling languages can be improved by integrating several features, most notably partial derivatives, differentiation without duplication, and support for equations. These features do not appear to be addressed in a satisfactory manner in mainstream modeling and simulation tools.</p>)
  • Publications:Predicting Air Compressor Failures with Echo State Networks  + (<p>Modern vehicles have increasing amounts of data streaming continuously on-board their controller area networks. These data are primarily used for controlling the vehicle and for feedback to the driver, but they can also be exploited to detect faults and predict failures. The traditional diagnostics paradigm, which relies heavily on human expert knowledge, scales poorly with the increasing amounts of data generated by highly digitised systems. The next generation of equipment monitoring and maintenance prediction solutions will therefore require a different approach, where systems can build up knowledge (semi-)autonomously and learn over the lifetime of the equipment.</p><p>A key feature in such systems is the ability to capture and encode characteristics of signals, or groups of signals, on-board vehicles using different models. Methods that do this robustly and reliably can be used to describe and compare the operation of the vehicle to previous time periods or to other similar vehicles. In this paper two models for doing this, for a single signal, are presented and compared on a case of on-road failures caused by air compressor faults in city buses. One approach is based on histograms and the other is based on echo state networks. It is shown that both methods are sensitive to the expected changes in the signal's characteristics and work well on simulated data. However, the histogram model, despite being simpler, handles the deviations in real data better than the echo state network.</p>)
  • Publications:Wisdom of the Crowd for Fault Detection and Prognosis  + (<p>Monitoring and maintaining equipment to ensure its reliability and availability is vital to industrial operations. With the rapid development and growth of interconnected devices, the Internet of Things promotes the digitization of industrial assets, to be sensed and controlled across existing networks, enabling access to a vast amount of sensor data that can be used for condition monitoring. However, the traditional way of gaining knowledge and wisdom, by the expert, for designing condition monitoring methods is unfeasible for fully utilizing and digesting this enormous amount of information. It does not scale well to complex systems with a huge number of components and subsystems. Therefore, a more automated approach that relies on human experts to a lesser degree, is capable of discovering interesting patterns, generates models for estimating the health status of the equipment, supports maintenance scheduling, and can scale up to many pieces of equipment and their subsystems will provide great benefits for the industry.</p><p>This thesis demonstrates how to utilize the concept of the "Wisdom of the Crowd", i.e. a group of similar individuals, for fault detection and prognosis. The approach is built on an unsupervised deviation detection method, Consensus Self-Organizing Models (COSMO). The method assumes that the majority of a crowd is healthy; individuals that deviate from the majority are considered potentially faulty. The COSMO method encodes sensor data into models, and the distances between individual samples and the crowd are measured in the model space. This information, regarding how differently an individual performs compared to its peers, is utilized as an indicator for estimating the health status of the equipment. The generality of the COSMO method is demonstrated with three condition monitoring case studies: i) fault detection and failure prediction for a commercial fleet of city buses, ii) prognosis for a fleet of turbofan engines, and iii) finding cracks in metallic material. In addition, the flexibility of the COSMO method is demonstrated by: i) being capable of incorporating domain knowledge on specializing relevant expert features; ii) being able to detect multiple types of faults with a generic data representation, i.e. an Echo State Network; iii) incorporating expert feedback on adapting reference group candidates under an active learning setting. Last but not least, this thesis demonstrates that the remaining useful life of the equipment can be estimated from the distance to a crowd of peers.</p>)
  • Publications:Monitoring equipment operation through model and event discovery  + (<p>Monitoring the operation of complex systems in real-time is becoming both required and enabled by current IoT solutions. Predicting faults and optimising productivity requires autonomous methods that work without extensive human supervision. One way to automatically detect deviating operation is to identify groups of peers, or similar systems, and evaluate how well each individual conforms with the group. We propose a monitoring approach that can construct knowledge more autonomously and relies on human experts to a lesser degree: without requiring the designer to think of all possible faults beforehand; able to do the best possible with signals that are already available, without the need for dedicated new sensors; scaling up to “one more system and component” and multiple variants; and finally, one that will adapt to changes over time and remain relevant throughout the lifetime of the system. © Springer Nature Switzerland AG 2018.</p>)
  • Publications:Mode tracking using multiple data streams  + (<p>Most existing work in information fusion focuses on combining information with well-defined meaning towards a concrete, pre-specified goal. In contradistinction, we instead aim for autonomous discovery of high-level knowledge from ubiquitous data streams. This paper introduces a method for recognition and tracking of hidden conceptual modes, which are essential to fully understand the operation of complex environments. We consider a scenario of analyzing usage of a fleet of city buses, where the objective is to automatically discover and track modes such as highway route, heavy traffic, or aggressive driver, based on available on-board signals. The method we propose is based on aggregating the data over time, since the high-level modes are only apparent in the longer perspective. We search through different features and subsets of the data, and identify those that lead to good clusterings, interpreting those clusters as initial, rough models of the prospective modes. We utilize Bayesian tracking in order to continuously improve the parameters of those models, based on the new data, while at the same time following how the modes evolve over time. Experiments with artificial data of varying degrees of complexity, as well as on real-world datasets, prove the effectiveness of the proposed method in accurately discovering the modes and in identifying which one best explains the current observations from multiple data streams. © 2017 Elsevier B.V. All rights reserved.</p>)
  • Publications:A Symbolic Approach to Human Motion Analysis Using Inertial Sensors : Framework and Gait Analysis Study  + (<p>Motion analysis deals with determining what and how activities are being performed by a subject, through the use of sensors. The process of answering the what question is commonly known as classification, and answering the how question is here referred to as characterization. Frequently, combinations of inertial sensors such as accelerometers and gyroscopes are used for motion analysis. These sensors are cheap, small, and can easily be incorporated into wearable systems.</p><p>The overall goal of this thesis was to improve the processing of inertial sensor data for the characterization of movements. This thesis presents a framework for the development of motion analysis systems that targets movement characterization, and describes an implementation of the framework for gait analysis. One substantial aspect of the framework is symbolization, which transforms the sensor data into strings of symbols. Another aspect of the framework is the inclusion of human expert knowledge, which facilitates the connection between data and human concepts, and clarifies the analysis process to a human expert.</p><p>The proposed implementation was compared to state-of-practice gait analysis systems, and evaluated in a clinical environment. Results showed that expert knowledge can be successfully used to parse symbolic data and identify the different phases of gait. In addition, the symbolic representation enabled the creation of new gait symmetry and gait normality indices. The proposed symmetry index was superior to many others in detecting movement asymmetry in early-to-mid-stage Parkinson's Disease patients. Furthermore, the normality index showed potential in the assessment of patient recovery after hip-replacement surgery. In conclusion, this implementation of the gait analysis system illustrated that the framework can be used as a road map for the development of movement analysis systems.</p>)
  • Publications:A new measure of movement symmetry in early Parkinson's disease patients using symbolic processing of inertial sensor data  + (<p>Movement asymmetry is one of the motor symptoms associated with Parkinson's Disease (PD). Therefore, being able to detect and measure movement symmetry is important for monitoring the patient's condition.</p><p>The present paper introduces a novel symbol-based symmetry index calculated from inertial sensor data. The method is explained, evaluated and compared to six other symmetry measures. These measures were used to determine the symmetry of both upper and lower limbs during walking of 11 early-to-mid-stage PD patients and 15 control subjects. The patients included in the study showed minimal motor abnormalities according to the Unified Parkinson's Disease Rating Scale (UPDRS).</p><p>The symmetry indices were used to classify subjects into two different groups corresponding to PD or control. The proposed method presented high sensitivity and specificity with an area under the Receiver Operating Characteristic (ROC) curve of 0.872, 9% greater than the second best method. The proposed method also showed an excellent Intraclass Correlation Coefficient (ICC) of 0.949, 55% greater than the second best method. Results suggest that the proposed symmetry index is appropriate for this particular group of patients.</p>)
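For context, a conventional limb-symmetry index of the kind the proposed symbolic measure is compared against can be computed as below. This is the classical ratio-style index, not the paper's symbol-based one; the function names are illustrative.

```python
def symmetry_index(left, right):
    """Classical symmetry index in percent: 0 for perfect symmetry,
    larger values indicate greater left/right asymmetry."""
    if left == right == 0:
        return 0.0
    return abs(left - right) / (0.5 * (left + right)) * 100.0

def mean_symmetry(left_values, right_values):
    """Average symmetry index over paired per-gait-cycle measurements
    (e.g. left vs. right stride times)."""
    indices = [symmetry_index(l, r)
               for l, r in zip(left_values, right_values)]
    return sum(indices) / len(indices)
```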
  • Publications:An Experimental Study of Using Rule Induction Algorithm in Combiner Multiple Classifier  + (<p>Multiple classifiers consist of sets of subclassifiers, whose individual predictions are combined to classify new objects. These approaches attract the interest of researchers as they can outperform single classifiers on a wide range of classification problems. This paper presents an experimental study of using the rule induction algorithm MODLEM in the multiple classifier scheme called combiner, which is a specific meta-learning approach to aggregating the answers of component classifiers. Our experimental results show that the improvement of predictive accuracy depends on the independence of errors made by the base classifiers. Moreover, we summarise our experience with using MODLEM as a component in other multiple classifiers, namely bagging and n2 classifiers.</p>)
  • Publications:Exploiting statistical energy test for comparison of multiple groups in morphometric and chemometric data  + (<p>A multivariate permutation-based energy test of equal distributions is considered here. The approach is attributable to the emerging field of ε-statistics and uses the natural logarithm of the Euclidean distance for the within-sample and between-sample components. The result from permutations is enhanced by a tail approximation through the generalized Pareto distribution to boost the precision of the obtained p-values. Generalization from the two-sample case to multiple samples is achieved by combining p-values through meta-analysis. Several strategies of varied statistical power are possible, while the maximum of all pairwise p-values is chosen here. The proposed approach is tested on several morphometric and chemometric data sets. Each data set is additionally transformed by principal component analysis for the purpose of dimensionality reduction and visualization in 2D space. Variable selection, namely sequential search and multi-cluster feature selection, is applied to reveal in what aspects the groups differ most.</p><p>Morphometric data sets used: 1) survival data of house sparrows Passer domesticus; 2) orange and blue varieties of rock crabs Leptograpsus variegatus; 3) ontogenetic stages of the trilobite species Trimerocephalus lelievrei; 4) the marine phytoplankton species Prorocentrum minimum.</p><p>Chemometric data sets used: 1) essential oil composition of specimens of the medicinal plant Hyptis suaveolens; 2) chemical information of olive oil samples; 3) elemental composition of biomass ash; 4) exchangeable cations of earth metals in forest soil samples.</p><p>Statistically significant differences between groups were successfully indicated, but the selection of variables had a profound effect on the result. The permutation-based energy test and its multi-sample generalization through meta-analysis proved useful as an unbalanced non-parametric MANOVA approach. The introduced solution is simple, yet flexible and powerful, and is by no means confined to morphometrics or chemometrics alone, but has a wide range of potential applications. Copyright © 2015 Elsevier B.V.</p>)
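As a rough illustration of the permutation-based two-sample energy test discussed in this abstract: the sketch below uses the plain Euclidean distance rather than the paper's log-distance variant, omits the Pareto tail approximation and meta-analysis steps, and all function names are illustrative.

```python
import math
import random

def energy_stat(x, y):
    """Two-sample energy statistic for lists of d-dimensional points."""
    n, m = len(x), len(y)
    between = sum(math.dist(a, b) for a in x for b in y) / (n * m)
    within_x = sum(math.dist(a, b) for a in x for b in x) / (n * n)
    within_y = sum(math.dist(a, b) for a in y for b in y) / (m * m)
    return 2 * between - within_x - within_y

def permutation_p_value(x, y, n_perm=999, seed=0):
    """Permutation p-value for the hypothesis of equal distributions:
    the observed statistic is compared against statistics computed on
    random relabellings of the pooled sample."""
    rng = random.Random(seed)
    observed = energy_stat(x, y)
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if energy_stat(pooled[:len(x)], pooled[len(x):]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)
```

Extending this to multiple groups as in the paper would combine the pairwise p-values (here, the maximum over all pairs) via meta-analysis.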
  • Publications:Laser-Based Navigation Enhanced with 3D Time-of-Flight Data  + (<p>Navigation and obstacle avoidance in robotics using planar laser scans have matured over the last decades. They basically enable robots to penetrate highly dynamic and populated spaces, such as people's homes, and move around smoothly. However, in an unconstrained environment the two-dimensional perceptual space of a fixed-mounted laser is not sufficient to ensure safe navigation. In this paper, we present an approach that pools a fast and reliable motion generation approach with modern 3D capturing techniques using a Time-of-Flight camera. Instead of attempting to implement full 3D motion control, which is computationally more expensive and simply not needed for the targeted scenario of a domestic robot, we introduce a "virtual laser". For the originally solely laser-based motion generation, the technique of fusing real laser measurements and 3D point clouds into a continuous data stream is 100% compatible and transparent. The paper covers the general concept, the necessary extrinsic calibration of two very different types of sensors, and exemplarily illustrates the benefit, which is to avoid obstacles not perceivable in the original laser scan.</p>)
  • Publications:Evaluation of Cracks in Metallic Material Using a Self-Organized Data-Driven Model of Acoustic Echo-Signal  + (<p>The non-linear acoustic technique is an attractive approach for evaluating early fatigue as well as cracks in material. However, its accuracy is greatly restricted by external non-linearities of ultrasonic measurement systems. In this work, an acoustical data-driven deviation detection method, called consensus self-organizing models (COSMO) and based on statistical probability models, was introduced to study the evolution of localized crack growth. Using the pitch-catch technique, frequency spectra of acoustic echoes collected from different locations of a specimen were compared, resulting in a Hellinger distance matrix used to construct statistical parameters such as the z-score, p-value and T-value. It is shown that the statistical significance (p-value) of the COSMO method has a strong relationship with the crack growth. In particular, the T-value, a logarithm-transformed p-value, increases proportionally with the growth of cracks, and can thus be applied to locate the position of cracks and monitor the deterioration of materials. © 2018 by the authors.</p>)
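A minimal sketch of the Hellinger-distance comparison at the heart of a COSMO-style analysis might look as follows. It assumes each echo spectrum has been normalized into a discrete probability distribution, and the simple per-unit deviation score below merely stands in for the paper's z-, p- and T-value machinery.

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    return math.sqrt(sum((math.sqrt(a) - math.sqrt(b)) ** 2
                         for a, b in zip(p, q))) / math.sqrt(2)

def distance_matrix(spectra):
    """Pairwise Hellinger distances between normalized spectra."""
    n = len(spectra)
    return [[hellinger(spectra[i], spectra[j]) for j in range(n)]
            for i in range(n)]

def deviation_scores(dm):
    """Mean distance of each unit to its peers; the most deviating unit
    (e.g. the location closest to a crack) gets the largest score."""
    n = len(dm)
    return [sum(row) / (n - 1) for row in dm]
```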
  • Publications:Prototype-Based Contour Detection Applied to Segmentation of Phytoplankton Images  + (<p>A novel prototype-based framework for image segmentation is introduced and successfully applied to cell segmentation in microscopy imagery. This study is concerned with precise contour detection for objects representing the Prorocentrum minimum species in phytoplankton images. The framework requires a single object with a ground-truth contour as a prototype to perform detection of the contour for the remaining objects. The level set method is chosen as the segmentation algorithm and its parameters are tuned by differential evolution. The fitness function is based on the distance between pixels near the contour in the prototype image and pixels near the detected contour in the target image. Pixels of interest correspond to several concentric bands of various widths in the outer and inner areas relative to the contour. The usefulness of the introduced approach was demonstrated by comparing it to the basic level set and advanced Weka segmentation techniques. Solving the parameter selection problem of the level set algorithm considerably improved segmentation accuracy.</p>)
  • Publications:Detecting Halftone Dots for Offset Print Quality Assessment Using Soft Computing  + (<p>Nowadays, most information processing steps in the printing industry are highly automated, except the last one: print quality assessment and control. We present a way to assess one important aspect of print quality, namely the distortion of the halftone dots that printed colour pictures are made of. The problem is formulated as assessing the distortion of circles detected in microscale images of halftone dot areas. In this paper several known circle detection techniques are explored in terms of accuracy and robustness. We also present a new circle detection technique based on the fuzzy Hough transform (FHT), extended with k-means clustering for detecting the positions of accumulator peaks and with an optional fine-tuning step implemented through unsupervised learning. Prior knowledge about the approximate positions and radii of the circles is utilized in the algorithm. Compared to the FHT, the proposed technique is shown to increase the estimation accuracy of the position and size of detected circles. The techniques are investigated using synthetic and natural images.</p>)
  • Publications:Advances in computational intelligence-based print quality assessment and control in offset colour printing  + (<p>Nowadays most information processing steps in the printing industry are highly automated, except the last one: print quality assessment and control. Usually quality assessment is a manual, tedious, and subjective procedure. This article presents a survey of the few existing developments in the field of computational intelligence-based print quality assessment and control in offset colour printing. Recent achievements in this area and advances in applied computational intelligence, expert and decision support systems lay good foundations for creating practical tools to automate the last step of the printing process.</p>)
  • Publications:Evaluation of the performance of accelerometer-based gait event detection algorithms in different real-world scenarios using the MAREA gait database  + (<p>Numerous gait event detection (GED) algorithms have been developed using accelerometers, as they allow the possibility of long-term gait analysis in everyday life. However, almost all such existing algorithms have been developed and assessed using data collected in controlled indoor experiments with pre-defined paths and walking speeds. On the contrary, human gait is quite dynamic in the real world, often involving varying gait speeds, changing surfaces and varying surface inclinations. Though portable wearable systems can be used to conduct experiments directly in the real world, there is a lack of publicly available gait datasets or studies evaluating the performance of existing GED algorithms in various real-world settings.</p><p>This paper presents a new gait database called MAREA (n=20 healthy subjects) that consists of walking and running in indoor and outdoor environments with accelerometers positioned on the waist, wrist and both ankles. The study also evaluates the performance of six state-of-the-art accelerometer-based GED algorithms in different real-world scenarios, using the MAREA gait database. The results reveal that the performance of these algorithms is inconsistent and varies with changing environments and gait speeds. All algorithms demonstrated good performance for the scenario of steady walking in a controlled indoor environment, with a combined median F1-score of 0.98 for Heel-Strikes and 0.94 for Toe-Offs. However, they exhibited significantly decreased performance when evaluated in other less controlled scenarios such as walking and running in an outdoor street, with a combined median F1-score of 0.82 for Heel-Strikes and 0.53 for Toe-Offs. Moreover, all GED algorithms displayed better performance for detecting Heel-Strikes as compared to Toe-Offs, when evaluated in different scenarios.</p>)
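The F1-score evaluation referred to in this abstract can be illustrated with a minimal event-matching sketch. The greedy matching rule and the tolerance window below are assumptions for illustration, not the exact protocol of the study.

```python
def f1_score_events(detected, reference, tolerance=0.05):
    """F1-score for detected event times (in seconds) against reference
    times, matched greedily one-to-one within a tolerance window."""
    unmatched = list(reference)
    tp = 0
    for t in sorted(detected):
        # First unmatched reference event within the tolerance window.
        match = next((r for r in unmatched if abs(r - t) <= tolerance), None)
        if match is not None:
            unmatched.remove(match)
            tp += 1
    fp = len(detected) - tp   # detections with no reference event
    fn = len(reference) - tp  # reference events that were missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```

In a gait setting, `detected` would hold e.g. Heel-Strike times produced by a GED algorithm and `reference` the annotated ground-truth times.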
  • Publications:Towards a computer-aided diagnosis system for vocal cord diseases  + (<p>OBJECTIVE: The objective of this work is to investigate the possibility of creating a computer-aided decision support system for automated analysis of vocal cord images, aiming to categorize diseases of the vocal cords. METHODOLOGY: The problem is treated as a pattern recognition task. To obtain a concise and informative representation of a vocal cord image, colour, texture, and geometrical features are used. The representation is further analyzed by a pattern classifier categorizing the image into healthy, diffuse, and nodular classes. RESULTS: The approach developed was tested on 785 vocal cord images collected at the Department of Otolaryngology, Kaunas University of Medicine, Lithuania. A correct classification rate of over 87% was obtained when categorizing a set of unseen images into the aforementioned three classes. CONCLUSION: Bearing in mind the high similarity of the decision classes, the results obtained are rather encouraging and the developed tools could be very helpful for assuring objective analysis of the images of laryngeal diseases.</p>)
  • Publications:Structural and Syntactic Techniques for Recognition of Ethiopic Characters  + (<p>OCR technology for Latin scripts is well advanced in comparison to other scripts. However, the available results for Latin are not always sufficient to directly adopt them for other scripts such as the Ethiopic script. In this paper, we propose a novel approach that uses structural and syntactic techniques for recognition of Ethiopic characters. We reveal that primitive structures and their spatial relationships form a unique set of patterns for each character. The relationships of primitives are represented by a special tree structure, which is also used to generate a pattern. A knowledge base of the alphabet that stores the possibly occurring patterns for each character is built. Recognition is then achieved by matching the generated pattern against each pattern in the knowledge base. Structural features are extracted using the direction field tensor. Experimental results are reported, and the recognition system is shown to be insensitive to variations in font types, sizes and styles.</p>)
  • Publications:Combining image, voice, and the patient's questionnaire data to categorize laryngeal disorders  + (<p>Objective: This paper is concerned with soft computing techniques for categorizing laryngeal disorders based on information extracted from an image of the patient's vocal folds, a voice signal, and questionnaire data.</p><p>Methods: Multiple feature sets are exploited to characterize images and voice signals. To characterize the colour, texture, and geometry of biological structures seen in colour images of vocal folds, eight feature sets are used. Twelve feature sets are used to obtain a comprehensive characterization of a voice signal (the sustained phonation of the vowel sound /a/). Answers to 14 questions constitute the questionnaire feature set. A committee of support vector machines is designed for categorizing the image, voice, and query data represented by the multiple feature sets into the healthy, nodular and diffuse classes. Five alternatives for aggregating separate SVMs into a committee are explored. Feature selection and classifier design are combined into the same learning process based on genetic search.</p><p>Results: Data of all three modalities were available from 240 patients. Among those, 151 patients belong to the nodular class, 64 to the diffuse class and 25 to the healthy class. When using a single feature set to characterize each modality, test set classification accuracies of 75.0%, 72.1%, and 85.0% were obtained for the image, voice and questionnaire data, respectively. The use of multiple feature sets allowed the accuracy to be increased to 89.5% and 87.7% for the image and voice data, respectively. A test set classification accuracy of over 98.0% was obtained from a committee exploiting multiple feature sets from all three modalities. The highest classification accuracy was achieved when using SVM-based aggregation with the hyperparameters of the SVM determined by genetic search. Bearing in mind the difficulty of the task, the obtained classification accuracy is rather encouraging.</p><p>Conclusions: Combining both multiple feature sets characterizing a single modality and the three modalities allowed the classification accuracy to be substantially improved compared to the highest accuracy obtained from a single feature set and a single modality. In spite of the unbalanced data sets used, the error rates obtained for the three classes were rather similar.</p>)
  • Publications:Towards emotion recognition for virtual environments : an evaluation of eeg features on benchmark dataset  + (<p>One of the challenges in virtual environments is the difficulty users have in interacting with these increasingly complex systems. Ultimately, endowing machines with the ability to perceive users' emotions will enable a more intuitive and reliable interaction. Consequently, using the electroencephalogram as a bio-signal sensor, the affective state of a user can be modelled and subsequently utilised in order to achieve a system that can recognise and react to the user's emotions. This paper investigates features extracted from electroencephalogram signals for the purpose of affective state modelling based on Russell's Circumplex Model. Investigations are presented that aim to provide the foundation for future work in modelling user affect to enhance the interaction experience in virtual environments. The DEAP dataset was used within this work, along with a Support Vector Machine and Random Forest, which yielded reasonable classification accuracies for Valence and Arousal using feature vectors based on statistical measurements, band power of the EEG frequency bands, and High Order Crossing of the EEG signal. © 2017, The Author(s).</p>)
  • Publications:A fiber-optic interconnection concept for scaleable massively parallel computing  + (<p>One of the most important features of interconnection networks for massively parallel computer systems is scaleability. The fiber-optic network described in this paper uses both wavelength division multiplexing and a configurable ratio between optics and electronics to achieve an architecture with good scaleability. The network connects distributed modules together into a huge parallel system where each node itself typically consists of parallel processing elements. The paper describes two different implementations of the star topology: one uses an electronic star and fiber-optic connections, the other is purely optical with a passive optical star in the center. The medium access control of the communication concept is presented and some scaleability properties are discussed, also involving a multiple-star topology.</p>)
  • Publications:A Comparative Study of Fingerprint Image-Quality Estimation Methods  + (<p>One of the open issues in fingerprint verification is the lack of robustness against image-quality degradation. Poor-quality images result in spurious and missing features, thus degrading the performance of the overall system. Therefore, it is important for a fingerprint recognition system to estimate the quality and validity of the captured fingerprint images. In this work, we review existing approaches for fingerprint image-quality estimation, including the rationale behind the published measures and visual examples showing their behavior under different quality conditions. We have also tested a selection of fingerprint image-quality estimation algorithms. For the experiments, we employ the BioSec multimodal baseline corpus, which includes 19,200 fingerprint images from 200 individuals acquired in two sessions with three different sensors. The behavior of the selected quality measures is compared, showing high correlation between them in most cases. The effect of low-quality samples on the verification performance is also studied for a widely available minutiae-based fingerprint matching system.</p>)
  • Publications:Online Handwriting Recognition of Ethiopic Script  + (<p>Online recognition of handwritten characters is gaining renewed interest as it provides a natural way of data entry for a wide variety of handheld devices. In this paper, we present an online handwriting recognition system for Ethiopic script based on structural and syntactical analysis of the strokes forming characters. The complex structures of characters are represented by the spatio-temporal relationships of simple-shaped strokes called primitives. A special tree structure is used to model the spatio-temporal relationships of the strokes. The tree generates a unique set of primitive stroke sequences for each character, and for recognition each stroke sequence is matched against a stored knowledge base. Characters are also classified based on their structural similarity to select a plausible set of characters for an unknown input, which improves recognition and processing time. We also present a dataset collected for training and testing online recognition systems for Ethiopic script. The dataset is prepared in accordance with the international standard UNIPEN format. The recognition system is tested with the collected dataset and experimental results are reported.</p>)
  • Publications:Self-organized Modeling for Vehicle Fleet Based Fault Detection  + (<p>Operators of fleets of vehicles desire the best possible availability and usage of their vehicles. This means the preference is that maintenance of a vehicle is scheduled with intervals as long as possible. However, it is then important to be able to detect if a component in a specific vehicle is not functioning properly earlier than expected (due to e.g. manufacturing variations). This paper proposes a telematics-based fault detection scheme for enabling fault detection for diagnostics by using a population of vehicles. The basic idea is that it is possible to create low-dimensional representations of a sub-system or component in a vehicle, where the representation (or model parameters) of a vehicle can be monitored for changes compared to the model parameters observed in a fleet of vehicles. If a model in a vehicle is found to deviate compared to a group of models from a fleet of vehicles, then the vehicle is judged to need diagnostics for that component (assuming the deviation in the model cannot be attributed to e.g. a different driver behavior). The representation should be low-dimensional so that it is possible to transfer it over a limited wireless communication channel to a communications center where the comparison is made. The algorithm is shown to be able to detect leakage on simulated data from a cooling system; work is currently in progress on detecting other types of faults.</p>)
  • Publications:An adaptive functional model of the filtering system of the cochlea of the inner ear  + (<p>Outer hair cells (OHC) in the cochlea of the inner ear, together with the local structures of the basilar membrane, reticular lamina and tectorial membrane, constitute the adaptive primary filters (PF) of the second order. We used them for designing a serial-parallel signal filtering system. We determined a rational number of PF included in Gaussian channels of the system, summation weights of the output signals, and the distribution of PF along the basilar membrane. A Gaussian channel consisting of five PF is presented as an example, and properties of the channel operating in the linear and non-linear mode are determined during adaptation and under efferent control. The results suggest that application of biological filtering principles can be useful for designing cochlear implants with new strategies of speech encoding.</p>)
  • Publications:An adaptive panoramic filter bank as a qualitative model of the filtering system of the cochlea : The peculiarities in linear and nonlinear mode  + (<p>Outer hair cells in the cochlea of the ear, together with the local structures of the basilar membrane, reticular lamina and tectorial membrane, constitute the adaptive primary filters (PF) of the second order. We used them for designing a serial-parallel signal filtering system. We determined a rational number of the PF included in Gaussian channels of the system, summation weights of the output signals, and the distribution of the PF along the basilar membrane. A Gaussian panoramic filter bank, each channel of which consists of five PF, is presented as an example. The properties of the PF, the channel and the filter bank operating in the linear and nonlinear modes are determined during adaptation and under efferent control. The results suggest that application of biological filtering principles can be useful for designing cochlear implants with new speech encoding strategies.</p>)
  • Publications:Towards Understanding ICU Treatments Using Patient Health Trajectories  + (<p>Overtreatment or mistreatment of patients is a phenomenon commonly encountered in health care, and especially in the Intensive Care Unit (ICU), resulting in increased morbidity and mortality. We explore the MIMIC-III intensive care unit database and conduct experiments on an interpretable feature space based on the fusion of severity subscores commonly used to predict mortality in an ICU setting. Clustering of medication and procedure context vectors based on a semantic representation has been performed to find common and individual treatment patterns. Two-day patient health state trajectories of a cohort of congestive heart failure patients are clustered, correlated with the treatment, and evaluated based on an increase or reduction of the probability of mortality on the second day of stay. Experimental results show differences in treatments and outcomes, and the potential for using patient health state trajectories as a starting point for further evaluation of medical treatments and interventions. © Springer Nature Switzerland AG 2019.</p>)
  • Publications:Incremental classification of process data for anomaly detection based on similarity analysis  + (<p>Performance evaluation and anomaly detection in complex systems are time-consuming tasks based on the analysis, similarity assessment and classification of many different data sets from real operations. This paper presents an original computational technology for unsupervised incremental classification of large data sets using a specially introduced similarity analysis method. First, so-called compressed data models are obtained from the original large data sets by a newly proposed sequential clustering algorithm. The data sets are then compared in pairs, not directly, but by using their respective compressed data models. The evaluation of the pairs is done by a special similarity analysis method that uses so-called Intelligent Sensors (Agents) and data potentials. Finally, a classification decision is generated by using a predefined threshold of similarity. The applicability of the proposed computational scheme for anomaly detection based on many available large data sets is demonstrated on an example of 18 synthetic data sets. Suggestions for further improvements of the whole computational technology and better applicability are also discussed in the paper.</p>)
  • Publications:Damascening video databases for evaluation of face tracking and recognition -- The DXM2VTS database  + (<p>Performance quantification of biometric systems, such as face tracking and recognition, highly depends on the database used for testing the systems. Systems trained and tested on realistic and representative databases evidently perform better. Actually, the main reason for evaluating any system on test data is that these data sets represent problems that systems might face in the real world. However, building biometric video databases with realistic backgrounds for testing is expensive, especially due to the high demand of cooperation from the side of the participants. For example, the XM2VTS database contains thousands of videos recorded in a studio from 295 subjects. Recording these subjects repeatedly in public places such as supermarkets, offices, streets, etc., is not realistic. To this end, we present a procedure to separate the background of a video recorded in studio conditions with the purpose of replacing it with an arbitrary complex background, e.g., an outdoor scene containing motion, to measure performance of, e.g., eye tracking. Furthermore, we present how an affine transformation and synthetic noise can be incorporated into the production of the new database to simulate natural noise, e.g. motion blur due to translation, zooming and rotation. The entire system is applied to the XM2VTS database, which already consists of several terabytes of data, to produce the DXM2VTS (Damascened XM2VTS) database, essentially without an increase in resource consumption, i.e., storage, bandwidth, and most importantly, the time of clients populating the database and the time of the operators.</p>)
  • Publications:Periocular Biometrics : Databases, Algorithms and Directions  + (<p>Periocular biometrics has been established as an independent modality due to concerns about the performance of iris or face systems in uncontrolled conditions. Periocular refers to the facial region in the eye vicinity, including eyelids, lashes and eyebrows. It is available over a wide range of acquisition distances, representing a trade-off between the whole face (which can be occluded at close distances) and the iris texture (which does not have enough resolution at long distances). Since the periocular region appears in face or iris images, it can also be used in conjunction with these modalities. Features extracted from the periocular region have also been used successfully for gender and ethnicity classification, and to study the impact of gender transformation or plastic surgery on recognition performance. This paper presents a review of the state of the art in periocular biometric research, providing an insight into the most relevant issues and giving a thorough coverage of the existing literature. Future research trends are also briefly discussed. © 2016 IEEE.</p>)
  • Publications:Periocular biometrics : Databases, Algorithms and Directions  + (<p>Periocular biometrics has been established as an independent modality due to concerns about the performance of iris or face systems in uncontrolled conditions. Periocular refers to the facial region in the eye vicinity, including eyelids, lashes and eyebrows. It is available over a wide range of acquisition distances, representing a tradeoff between the whole face (which can be occluded at close distances) and the iris texture (which does not have enough resolution at long distances). Since the periocular region appears in face or iris images, it can also be used in conjunction with these modalities. Features extracted from the periocular region have also been used successfully for gender and ethnicity classification, and to study the impact of gender transformation or plastic surgery on recognition performance. This paper presents a review of the state of the art in periocular biometric research, providing an insight into the most relevant issues and giving a thorough coverage of the existing literature. Future research trends are also briefly discussed.</p>)
  • Publications:An Overview of Periocular Biometrics  + (<p>Periocular biometrics refers specifically to the externally visible skin region of the face that surrounds the eye socket. Its utility is especially pronounced when the iris or the face cannot be properly acquired, it being the ocular modality requiring the least constrained acquisition process. It appears over a wide range of distances, even under partial face occlusion (close distance) or low-resolution iris (long distance), making it very suitable for unconstrained or uncooperative scenarios. It also avoids the need for iris segmentation, an issue in difficult images. In such situations, identifying a suspect where only the periocular region is visible is one of the toughest real-world challenges in biometrics. The richness of the periocular region in terms of identity is so high that the whole face can even be reconstructed from images of the periocular region alone. The technological shift to mobile devices has also resulted in many identity-sensitive applications becoming prevalent on these devices.</p>)
  • Publications:Cross-Spectral Biometric Recognition with Pretrained CNNs as Generic Feature Extractors  + (<p>Periocular recognition has gained attention in recent years thanks to its high discrimination capabilities in less constrained scenarios than face or iris. In this paper we propose a method for periocular verification under different light spectra using CNN features, with the particularity that the network has not been trained for this purpose. We use a ResNet-101 model pretrained for the ImageNet Large Scale Visual Recognition Challenge to extract features from the IIITD Multispectral Periocular Database. At each layer, the features are compared using the χ2 distance and cosine similarity to carry out verification between images, achieving improvements in the EER and in the accuracy at 1% FAR of up to 63.13% and 24.79%, respectively, in comparison with previous works that employ the same database. In addition, we train a neural network to match the best CNN feature layer vector from each spectrum. With this procedure, we achieve improvements of up to 65% (EER) and 87% (accuracy at 1% FAR) in cross-spectral verification with respect to previous studies.</p>)
  • Publications:Cross Spectral Periocular Matching using ResNet Features  + (<p>Periocular recognition has gained attention in recent years thanks to its high discrimination capabilities in less constrained scenarios than other ocular modalities. In this paper we propose a method for periocular verification under different light spectra using CNN features, with the particularity that the network has not been trained for this purpose. We use a ResNet-101 model pretrained for the ImageNet Large Scale Visual Recognition Challenge to extract features from the IIITD Multispectral Periocular Database. At each layer, the features are compared using the χ<sup>2</sup> distance and cosine similarity to carry out verification between images, achieving improvements in the EER and in the accuracy at 1% FAR of up to 63.13% and 24.79%, respectively, in comparison with previous works that employ the same database. In addition, we train a neural network to match the best CNN feature layer vector from each spectrum. With this procedure, we achieve improvements of up to 65% (EER) and 87% (accuracy at 1% FAR) in cross-spectral verification with respect to previous studies.</p>)
  • Publications:Near-infrared and visible-light periocular recognition with Gabor features using frequency-adaptive automatic eye detection  + (<p>Periocular recognition has gained attention recently due to demands for increased robustness of face or iris recognition in less controlled scenarios. We present a new system for eye detection based on complex symmetry filters, which has the advantage of not needing training. In addition, the separability of the filters allows faster detection via one-dimensional convolutions. This system is used as input to a periocular algorithm based on retinotopic sampling grids and Gabor spectrum decomposition. The evaluation framework is composed of six databases acquired with both near-infrared and visible sensors. The experimental setup is complemented with four iris matchers, used for fusion experiments. The eye detection system presented shows very high accuracy with near-infrared data, and reasonably good accuracy with one visible database. Regarding the periocular system, it exhibits great robustness to small errors in locating the eye centre, as well as to scale changes of the input image. The density of the sampling grid can also be reduced without sacrificing accuracy. Lastly, despite the poorer performance of the iris matchers with visible data, fusion with the periocular system can provide an improvement of more than 20%. The six databases used have been manually annotated, and the annotation is made publicly available. © The Institution of Engineering and Technology 2015.</p>)
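Several entries in this list (the two cross-spectral ResNet papers in particular) compare CNN feature vectors with the χ2 distance and cosine similarity and then accept or reject a verification attempt against a decision threshold. A minimal sketch of these two comparison measures, in plain Python on small toy vectors, could look as follows; the feature vectors and the threshold value are illustrative assumptions, not the authors' code or parameters.

```python
import math

def chi2_distance(u, v, eps=1e-10):
    """Chi-squared distance between two non-negative feature vectors
    (e.g. ReLU activations taken from a pretrained CNN layer).
    Smaller values mean more similar vectors; eps avoids division by zero."""
    return 0.5 * sum((x - y) ** 2 / (x + y + eps) for x, y in zip(u, v))

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors: 1.0 for
    parallel vectors, 0.0 for orthogonal ones."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(y * y for y in v))
    return dot / (norm_u * norm_v)

def is_match(u, v, threshold=0.8):
    """Toy verification decision: accept the pair if the cosine
    similarity exceeds a predefined threshold (hypothetical value)."""
    return cosine_similarity(u, v) >= threshold
```

For instance, `cosine_similarity([1.0, 2.0], [2.0, 4.0])` is 1.0 (the vectors are parallel), while `chi2_distance([1.0, 0.0], [0.0, 1.0])` is 1.0 (maximally dissimilar unit mass in disjoint bins). In the papers above, distances and similarities like these are computed per CNN layer and thresholded to produce the reported EER and accuracy-at-1%-FAR figures.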