Property:Abstract

From ISLAB/CAISR

This is a property of type Text.

Showing 20 pages using this property.
We investigate the estimation of illuminance flow using Histograms of Oriented Gradient features (HOGs). In a regression setting, we found, for both ridge regression and support vector machines, that the optimal solution shows a close resemblance to the gradient-based structure tensor (also known as the second moment matrix). Theoretical results are presented showing in detail how the structure tensor and the HOGs are connected. This relation will benefit computer vision tasks such as affine-invariant texture/object matching using HOGs. Several properties of HOGs are presented, among others how many bins are required for a directionality measure, and how to estimate HOGs through spatial averaging that requires no binning.
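As a rough illustration of the connection discussed above (a sketch, not the paper's derivation), the following numpy snippet computes both the second moment matrix and a magnitude-weighted orientation histogram from the same image gradients; on an oriented texture, the tensor's dominant eigenvector and the histogram's strongest bin indicate the same direction. The synthetic image and the bin count are arbitrary choices.

```python
import numpy as np

def structure_tensor(img):
    """Spatially averaged outer product of the image gradient
    (the second moment matrix)."""
    fy, fx = np.gradient(img.astype(float))
    return np.array([[np.mean(fx * fx), np.mean(fx * fy)],
                     [np.mean(fx * fy), np.mean(fy * fy)]])

def hog(img, n_bins=8):
    """Magnitude-weighted histogram of gradient orientations (modulo pi)."""
    fy, fx = np.gradient(img.astype(float))
    mag = np.hypot(fx, fy)
    ang = np.arctan2(fy, fx) % np.pi
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    return np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)

# A synthetic oriented texture: the tensor's dominant eigenvector and the
# HOG's peak bin should agree on the texture direction.
x, y = np.meshgrid(np.arange(64), np.arange(64))
img = np.sin(0.5 * x + 0.2 * y)
print(structure_tensor(img))
print(hog(img))
```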
This book constitutes the refereed proceedings of the 13th Scandinavian Conference on Image Analysis, SCIA 2003, held in Halmstad, Sweden in June/July 2003. The 148 revised full papers presented together with 6 invited contributions were carefully reviewed and selected for presentation. The papers are organized in topical sections on feature extraction, depth and surface, shape analysis, coding and representation, motion analysis, medical image processing, color analysis, texture analysis, indexing and categorization, and segmentation and spatial grouping.
Automatic inspection of printed multicoloured screen pictures demands methods for colour classification of screen dots and parts of screen dots directly in an arbitrary picture. The paper describes a technique using colour image analysis and an artificial neural network for inverse colour separation. For every arbitrary small part of a coloured picture, it is determined which coloured inks have been printed in that part. Special attention is paid to the problem of separating black colour produced by black ink from black colour produced by combining cyan, magenta and yellow inks. The technique is tested on multicoloured newsprint and a high correct colour classification rate has been demonstrated.
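The network architecture and features of the paper are not given in the abstract; the sketch below only illustrates the task setup as multi-label classification (which of the four inks is present in a patch) using a small scikit-learn MLP on made-up stand-in data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))                      # stand-in: mean RGB per patch
Y = (rng.uniform(size=(500, 4)) > 0.5).astype(int)  # stand-in: C, M, Y, K present?

# MLPClassifier accepts multi-label targets; each output unit answers
# "was this ink printed in the patch?"
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X, Y)
print(clf.predict(X[:3]))
```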
We present an image analysis and fuzzy integration based approach for the assessment of print quality in rotogravure printing. Values of several print distortion attributes are evaluated using image analysis procedures and are then aggregated into an overall print quality measure using fuzzy integration. The experimental investigations performed have shown that the print quality evaluations provided by the measure correlate well with the print quality rankings obtained from an expert. The developed tools are successfully used in printing shops for routine print quality control.
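The abstract does not say which fuzzy integral is used; the discrete Choquet integral is one common way to aggregate attribute scores under a fuzzy measure, sketched here with hypothetical distortion attributes and measure values.

```python
def choquet(scores, mu):
    """Discrete Choquet integral of attribute scores (values in [0, 1])
    with respect to a fuzzy measure mu over attribute subsets."""
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending scores
    total, prev = 0.0, 0.0
    remaining = frozenset(scores)
    for attr, val in items:
        total += (val - prev) * mu[remaining]
        prev = val
        remaining -= {attr}
    return total

# Hypothetical attributes and measure (mu must satisfy mu[empty] = 0,
# mu[all] = 1, and monotonicity under subset inclusion).
scores = {"dot_gain": 0.7, "mottle": 0.4, "streaks": 0.9}
mu = {
    frozenset(): 0.0,
    frozenset({"dot_gain"}): 0.3, frozenset({"mottle"}): 0.4,
    frozenset({"streaks"}): 0.3,
    frozenset({"dot_gain", "mottle"}): 0.6,
    frozenset({"dot_gain", "streaks"}): 0.7,
    frozenset({"mottle", "streaks"}): 0.8,
    frozenset({"dot_gain", "mottle", "streaks"}): 1.0,
}
print(choquet(scores, mu))
```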
This paper concentrates on an automated analysis of laryngeal images, aiming to categorize the images into three decision classes, namely healthy, nodular and diffuse. The problem is treated as an image analysis and classification task. To obtain a comprehensive description of laryngeal images, multiple feature sets are extracted, exploiting information on image colour, texture, geometry, image intensity gradient direction, and frequency content. A separate support vector machine (SVM) is used to categorize features of each type into the decision classes. The final image categorization is then obtained from the decisions provided by a committee of support vector machines. Bearing in mind the high similarity of the decision classes, the correct classification rate of over 94%, obtained when testing the system on 785 laryngeal images recorded by the Department of Otolaryngology, Kaunas University of Medicine, is rather promising.
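The committee's fusion rule is not detailed in the abstract; the sketch below assumes a simple majority vote over per-feature-type SVMs, with the classes encoded as integers (e.g. 0 = healthy, 1 = nodular, 2 = diffuse).

```python
import numpy as np
from sklearn.svm import SVC

def train_committee(feature_sets, y):
    """One SVM per feature type (colour, texture, geometry, ...)."""
    return [SVC(kernel="rbf").fit(X, y) for X in feature_sets]

def committee_predict(models, feature_sets):
    """Majority vote across the committee members' predictions."""
    votes = np.stack([m.predict(X) for m, X in zip(models, feature_sets)])
    # most frequent class label per sample (labels must be small ints)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```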
Since its beginning in 2003, the International Summer School on Biometrics has proved to be a unique forum, where advanced research students and lecturers in biometrics gather for a full week of study in several aspects of the science and technology of biometric recognition. This special issue includes contributions from the school lecturers and the school students.

The papers included represent the diversity of the current issues of biometric technologies.

Three papers are related to the use of face images in recognition, but investigate different problems. One is on the mechanisms of human face perception, to define some guidelines for automatic recognition systems. The second paper proposes a novel technique for liveness detection from face motion. The third paper addresses the face spoofing problem, proposing a methodology to improve recognition performance despite impostor attacks.

One paper addresses fingerprint matching, which is a classical biometric modality, but from a new perspective. In particular, the modeling of fingerprint skin elasticity and distortions is discussed in detail with a view to enhancing practical applications.

Handwriting verification is addressed from a multi-modal and multi-algorithmic perspective. Two papers address system security. The first uses multiple modalities and watermarking techniques applied to biometric templates. The last paper discusses the introduction of biometric data in e-passports, which is one of the most recent applications of biometrics.

The school meetings have been a rare opportunity to evaluate different technological challenges and current advances while sharing existing know-how in tutorials. The latter is important, as the volume of studies in the field has been massive. However, the hope has been not only to give new insights, but also to promote the application of research which is already mature in many respects, offering an answer to a variety of problems ranging from security enforcement to advanced man-machine interfaces.
In this work, we implement the floating-base prioritized whole-body compliant control framework described in Sentis et al. (IEEE Transactions on Robotics 26(3):483–501, 2010) on a wheeled humanoid robot maneuvering on sloped terrains. We then test it on a variety of compliant whole-body behaviors, including balance and kinesthetic mobility on irregular terrain, and Cartesian hand position tracking using the co-actuated (i.e. two joints are simultaneously actuated with one motor) robot’s upper body. The implementation serves as a hardware proof for a variety of whole-body control concepts that had previously been developed and tested in simulation. First, behaviors of two and three priority tasks are implemented and successfully executed on the humanoid hardware. In particular, first and second priority tasks are linearized in the task space through model feedback and then controlled through task accelerations. Postures, on the other hand, are shown to be asymptotically stable when using prioritized whole-body control structures and are then successfully tested on the real hardware. To cope with irregular terrains, the base is modeled as a six degree of freedom floating system and the wheels are characterized through contact and rolling constraints. Finally, center of mass balance capabilities using whole-body compliant control and kinesthetic mobility are implemented and tested on the humanoid hardware to climb terrains with various slopes.
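As a much-simplified picture of task prioritization (the framework above works at the acceleration level with full floating-base dynamics, which this sketch omits), the textbook velocity-level scheme projects the secondary task into the null space of the primary one:

```python
import numpy as np

def prioritized_joint_velocities(J1, dx1, J2, dx2):
    """Two-priority resolution at the velocity level: task 2 acts only in
    the null space of task 1, so it never disturbs the primary task."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1      # null-space projector of task 1
    dq1 = J1_pinv @ dx1                          # primary task contribution
    dq2 = np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq1)
    return dq1 + N1 @ dq2
```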
Physiological data such as head movements can be used to intuitively control a companion robot to perform useful tasks. We believe that some tasks such as reaching for high objects or getting out of a person’s way could be accomplished via size changes, but such motions should not seem threatening or bothersome. To gain insight into how size changes are perceived, the Think Aloud Method was used to gather typical impressions of a new robotic prototype which can expand in height or width based on a user’s head movements. The results indicate promise for such systems, also highlighting some potential pitfalls.
A comparison of two automatic peak detection algorithms is presented. One algorithm comes with the Voyager 5 Data Explorer™ program, the other is a new algorithm called Pepex® (short for PEptide Peak Extractor) from BioBridge Computing. The peak sets selected with both tools have been compared against each other and against manual peak selections, on a large set of mass spectra obtained after tryptic in-gel digestion of 2D-gel samples from human fetal fibroblasts. It is shown how much variation there is in peak sets, both when selected by human operators and when selected by automatic peak detection algorithms. This variation has been used as an advantage to gain significantly better protein identification results, using the Pepex tool, than what an experienced mass spectroscopist has achieved on the same data. The strongest improvement has been observed in weak spectra, where the signal peak intensities are low.
Latent fingerprints are usually processed with Automated Fingerprint Identification Systems (AFIS) by law enforcement agencies to narrow down possible suspects from a criminal database. AFIS do not commonly use all discriminatory features available in fingerprints but typically use only some types of features automatically extracted by a feature extraction algorithm. In this work, we explore ways to improve rank identification accuracies of AFIS when only a partial latent fingerprint is available. Towards solving this challenge, we propose a method that exploits extended fingerprint features (unusual/rare minutiae) not commonly considered in AFIS. This new method can be combined with any existing minutiae-based matcher. We first compute a similarity score based on least squares between latent and tenprint minutiae points, with rare minutiae features as reference points. Then the similarity score of the reference minutiae-based matcher at hand is modified based on a fitting error from the least square similarity stage. We use a realistic forensic fingerprint casework database in our experiments which contains rare minutiae features obtained from Guardia Civil, the Spanish law enforcement agency. Experiments are conducted using three minutiae-based matchers as a reference, namely: NIST-Bozorth3, VeriFinger-SDK and MCC-SDK. We report significant improvements in the rank identification accuracies when these minutiae matchers are augmented with our proposed algorithm based on rare minutiae features.
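A minimal sketch of the least-squares stage, assuming the rare minutiae give point correspondences and that the alignment is a 2D similarity transform (the paper's exact model may differ); the residual of the fit plays the role of the "fitting error" that then modifies the reference matcher's score. The fusion rule and the constant alpha below are illustrative assumptions, not the paper's.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2D similarity transform mapping src points onto dst:
    [x'; y'] = [[a, -b], [b, a]] @ [x; y] + [tx; ty]."""
    n = len(src)
    A = np.zeros((2 * n, 4))
    A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)])
    params, *_ = np.linalg.lstsq(A, dst.ravel(), rcond=None)
    return params                                 # a, b, tx, ty

def fitting_error(src, dst):
    """RMS residual of the least-squares alignment."""
    a, b, tx, ty = fit_similarity(src, dst)
    R = np.array([[a, -b], [b, a]])
    residual = dst - (src @ R.T + np.array([tx, ty]))
    return np.sqrt(np.mean(residual ** 2))

def fused_score(matcher_score, error, alpha=0.1):
    """Hypothetical fusion: down-weight the minutiae matcher's score
    by the alignment error."""
    return matcher_score * np.exp(-alpha * error)
```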
Fingerprint alteration is a type of presentation attack in which the attacker strives to avoid identification, e.g. at border control or in forensic investigations. As a countermeasure, fingerprint alteration detection aims to automatically discover the occurrence of such attacks by classifying fingerprint images as ’normal’ or ’altered’. In this paper, we propose four new features for improving the performance of fingerprint alteration detection modules. We evaluate the usefulness of these features on a benchmark and compare them to four existing features from the literature. © Copyright 2015 IEEE - All rights reserved.
Relaxed acquisition conditions in iris recognition systems have significant effects on the quality and resolution of acquired images, which can severely affect performance if not addressed properly. Here, we evaluate two trained super-resolution algorithms in the context of iris identification. They are based on reconstruction of local image patches, where each patch is reconstructed separately using its own optimal reconstruction function. We employ a database of 1,872 near-infrared iris images (with 163 different identities for identification experiments) and three iris comparators. The trained approaches are substantially superior to bilinear or bicubic interpolations, with one of the comparators providing a Rank-1 performance of ∼88% with images of only 15×15 pixels, and an identification rate of 95% with a hit list size of only 8 identities.
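A sketch of the per-patch idea, assuming the low-resolution input has been interpolated to the target size so that patches align, and using ridge regression as each patch position's "optimal reconstruction function" (the comparators and training details of the paper are not reproduced here):

```python
import numpy as np

def patches(img, s):
    """Non-overlapping s-by-s patches keyed by top-left position."""
    h, w = img.shape
    return {(i, j): img[i:i + s, j:j + s].ravel()
            for i in range(0, h - s + 1, s) for j in range(0, w - s + 1, s)}

def train_per_patch(lr_imgs, hr_imgs, s=5, lam=1e-3):
    """One ridge regressor per patch position, fitted on training pairs."""
    models = {}
    for k in patches(lr_imgs[0], s):
        X = np.array([patches(lr, s)[k] for lr in lr_imgs])
        Y = np.array([patches(hr, s)[k] for hr in hr_imgs])
        models[k] = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    return models

def reconstruct(lr, models, s=5):
    """Each patch is reconstructed separately by its own regressor."""
    out = np.zeros_like(lr, dtype=float)
    for (i, j), W in models.items():
        out[i:i + s, j:j + s] = (patches(lr, s)[(i, j)] @ W).reshape(s, s)
    return out
```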
An automated peak picking strategy is presented where several peak sets with different signal-to-noise levels are combined to form a more reliable statement on the protein identity. The strategy is compared against both manual peak picking and industry standard automated peak picking on a set of mass spectra obtained after tryptic in-gel digestion of 2D-gel samples from human fetal fibroblasts. The set of spectra contains samples ranging from strong to weak spectra, and the proposed multiple-scale method is shown to be much better on weak spectra than the industry standard method and a human operator, and equal in performance to these on strong and medium strong spectra. It is also demonstrated that peak sets selected by a human operator display a considerable variability and that it is impossible to speak of a single “true” peak set for a given spectrum. The described multiple-scale strategy both avoids time-consuming parameter tuning and exceeds the human operator in protein identification efficiency. The strategy therefore promises reliable automated user-independent protein identification using peptide mass fingerprints.
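A toy rendering of the combination idea, assuming peak sets are produced by thresholding at several signal-to-noise levels and that a peak's reliability grows with the number of levels at which it is detected (the paper's actual peak picker and fusion rule are not given in the abstract):

```python
import numpy as np
from scipy.signal import find_peaks

def multiscale_peaks(spectrum, snr_levels=(2, 3, 5, 8)):
    """Vote for each peak position across several S/N thresholds; peaks
    that survive more thresholds are treated as more reliable."""
    # robust noise estimate via scaled median absolute deviation
    noise = 1.4826 * np.median(np.abs(spectrum - np.median(spectrum)))
    votes = {}
    for snr in snr_levels:
        peaks, _ = find_peaks(spectrum, height=snr * noise)
        for p in peaks:
            votes[p] = votes.get(p, 0) + 1
    return votes  # position -> number of S/N levels at which it was found
```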
It is fully appreciated that progress in the development of data-driven approaches to activity recognition is being hampered by the lack of large-scale, high-quality, annotated data sets. In an effort to address this, the Open Data Initiative (ODI) was conceived as a potential solution for the creation of shared resources for the collection and sharing of open data sets. As part of this process, an analysis was undertaken of datasets collected using a smart environment simulation tool. A noticeable difference was found in the first 1-2 cycles of users generating data. Further analysis demonstrated the effects that this had on the development of activity recognition models, with a decrease in performance for both support vector machine and decision tree based classifiers. The outcome of the study has led to the production of a strategy to ensure an initial training phase is considered prior to full-scale collection of the data.
In the automotive industry, cost effective methods for predictive maintenance are increasingly in demand. The traditional approach for developing diagnostic methods on commercial vehicles is heavily based on knowledge of human experts, and thus it does not scale well to modern vehicles with many components and subsystems. In previous work we have presented a generic self-organising approach called COSMO that can detect, in an unsupervised manner, many different faults. In a study based on a commercial fleet of 19 buses operating in Kungsbacka, we have been able to predict, for example, fifty percent of the compressors that break down on the road, in many cases weeks before the failure.

In this paper we compare those results with a state of the art approach currently used in the industry, and we investigate how features suggested by experts for detecting compressor failures can be incorporated into the COSMO method. We perform several experiments, using both real and synthetic data, to identify issues that need to be considered to improve the accuracy. The final results show that the COSMO method outperforms the expert method.
This paper presents a colour image segmentation method which attains a high segmentation accuracy even when regions of the image that have to be separated are very similar in colour. The proposed method classifies pixels into colour classes. Competitive learning with 'conscience' is used to learn reference patterns for the different colour classes. A nearest neighbour classification rule followed by a block of fuzzy post-processing attains a high classification accuracy even for very similar colour classes. A correct classification rate of 97.8% has been achieved when classifying two very similar black colours, namely, the black printed with a black ink and the black printed with a mixture of cyan, magenta and yellow inks.
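A compact sketch of competitive learning with a conscience in the DeSieno style, where units that win too often are handicapped so that all reference vectors participate; the bias constant and learning rates here are illustrative, not the paper's.

```python
import numpy as np

def conscience_cl(X, k=4, lr=0.05, beta=1e-4, C=10.0, epochs=10, seed=0):
    """Competitive learning with 'conscience': the winner is chosen by the
    biased distance d_i - C * (1/k - p_i), where p_i tracks win frequency."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    p = np.full(k, 1.0 / k)                       # running win frequencies
    for _ in range(epochs):
        for x in rng.permutation(X):
            d = np.sum((W - x) ** 2, axis=1)
            w = np.argmin(d - C * (1.0 / k - p))  # conscience-biased winner
            W[w] += lr * (x - W[w])               # move winner towards sample
            p += beta * ((np.arange(k) == w) - p)
    return W                                      # learned reference patterns
```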
This paper is concerned with an approach to exploiting the information available from co-occurrence matrices computed for different distance parameter values. A polynomial of degree n is fitted to each of the 14 Haralick coefficients computed from the average co-occurrence matrices evaluated for several distance parameter values. The parameters of the polynomials constitute a set of new features. The experimental investigations performed substantiated the usefulness of the approach.
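In code, the construction reads roughly as below: one polynomial per Haralick coefficient, fitted across the distance values, with the polynomial coefficients concatenated into the new feature vector (the degree stands in for the paper's n).

```python
import numpy as np

def polynomial_texture_features(haralick, distances, degree=2):
    """haralick: array of shape (n_distances, 14), one row of Haralick
    coefficients per co-occurrence distance. Returns the 14 * (degree + 1)
    fitted polynomial coefficients as the new feature vector."""
    feats = []
    for c in range(haralick.shape[1]):
        feats.extend(np.polyfit(distances, haralick[:, c], degree))
    return np.array(feats)
```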
Discovering causal relations from limited amounts of data can be useful for many applications. However, all causal discovery algorithms need huge amounts of data to estimate the underlying causal graph. To address this gap, this paper proposes a novel visualization tool which incrementally discovers causal relations as more data becomes available. That is, we assume that stronger causal links will be detected quickly and weaker links revealed when enough data is available. In addition to causal links, the correlation between variables and the uncertainty of the strength of causal links are visualized in the same graph. The tool is illustrated on three example causal graphs, and results show that incremental discovery works and that the causal structure converges as more data becomes available. © 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM.
Performance evaluation and anomaly detection in complex systems are time-consuming tasks based on the analysis, similarity comparison, and classification of many different data sets from real operations. This paper presents an original computational technology for unsupervised incremental classification of large data sets using a specially introduced similarity analysis method. First, so-called compressed data models are obtained from the original large data sets by a newly proposed sequential clustering algorithm. The data sets are then compared pairwise, not directly, but via their respective compressed data models. The evaluation of the pairs is done by a special similarity analysis method that uses so-called Intelligent Sensors (Agents) and data potentials. Finally, a classification decision is generated using a predefined similarity threshold. The applicability of the proposed computational scheme for anomaly detection based on many available large data sets is demonstrated on an example of 18 synthetic data sets. Suggestions for further improvements of the whole computational technology and its applicability are also discussed in the paper.
The heavy vehicle industry is today not required by law to provide a tire pressure monitoring system, which has created issues with unknown tire pressure and tread depth during active service. There is also no standardization for these kinds of systems, which means that different manufacturers and third-party solutions work according to their own principles, and it can be hard to know what works for a given vehicle type. The objective is to create an indirect tire monitoring system that generalizes a method for detecting both incorrect tire pressure and tread depth for different types of vehicles within a fleet, without the need for additional physical sensors or vehicle-specific parameters. The existing sensors communicate through CAN and are interpreted by the Drivec Bridge hardware that exists in the fleet. Using supervised machine learning, a classifier was created for each axle, with the main focus on the front axle, which had the most issues. The classifier classifies the condition of the vehicle's tires and is implemented in Drivec's cloud service, from which it receives its data. The resulting classifier is a random forest implemented in Python. For the front axle, with a data set consisting of 9767 samples of buses with correct tire condition and 1909 samples of buses with incorrect tire condition, it has an accuracy of 90.54% (0.96%). The data sets were created from 34 unique measurements from buses between January and May 2017. This classifier has been exported and is used inside a Node.js module created for Drivec's cloud service, which is the result of the whole implementation. The developed solution is called the Indirect Tire Monitoring System (ITMS) and is seen as a process. This process predicts bad classes in the cloud, which leads to warnings. The warnings are defined as incidents. They contain only the information needed, and the rate of incidents is controlled so that incidents are created within an acceptable range over a period of time. These incidents are sent through the cloud for the operator to analyze when making upcoming maintenance decisions.
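A minimal stand-in for the classifier stage, with synthetic placeholder features since the CAN-derived feature set is not listed in the abstract; the thesis's random forest is reproduced only in spirit (scikit-learn rather than the exact pipeline), and the sample counts mirror the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(9767 + 1909, 8))                 # placeholder CAN features
y = np.concatenate([np.zeros(9767), np.ones(1909)])   # 1 = incorrect condition

# class_weight="balanced" compensates for the skewed class proportions
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                             random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"accuracy: {scores.mean():.4f} (+/- {scores.std():.4f})")
```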