Property:Abstract

From ISLAB/CAISR

This is a property of type Text.

Showing 20 pages using this property.
R
<p>It has been proposed that it should be possible to identify patterns of daily occupations that promote health or cause illness. This study aimed to develop and to evaluate a process for analysing and characterising subjectively perceived patterns of daily occupations, by describing patterns as consisting of main, hidden, and unexpected occupations. Yesterday diaries describing one day of 100 working married mothers were collected through interviews. The diaries were transformed into time-and-occupation graphs. An analysis based on visual interpretation of the patterns was performed. The graphs were grouped into the categories low, medium, or high complexity. In order to identify similarities, the graphs were then compared both pair-wise and group-wise. Finally, the complexity and similarities perspectives were integrated, identifying the most typical patterns of daily occupations representing low, medium, and high complexity. Visual differences in complexity were evident. In order to validate the Recognition of Similarities (ROS) process developed, a measure expressing the probability of change was computed. This probability was found to differ statistically significantly between the three groups, supporting the validity of the ROS process. © 2004, Taylor & Francis Group, LLC. All rights reserved.</p>  +
<p>As iris systems evolve towards a more relaxed acquisition, low image resolution will be a predominant issue. In this paper we evaluate a super-resolution method to reconstruct iris images based on Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of enhanced images by preserving local information. We employ a database of 560 images captured in the visible spectrum with two smartphones. The presented approach is superior to bilinear or bicubic interpolation, especially at lower resolutions. We also carry out recognition experiments with six iris matchers, showing that better performance can be obtained at low resolutions with the proposed eigen-patch reconstruction, with fusion of only two systems pushing the EER below 5-8% for down-sampled images as small as 13x13 pixels.</p>  +
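<p>For reference, a minimal sketch of the eigen-transformation idea: a low-resolution patch is expressed in the principal-component basis of low-resolution training patches, and the resulting per-exemplar weights are applied to the corresponding high-resolution patches. All names and the choice of k are illustrative assumptions, not the authors' implementation.</p>
<pre>
import numpy as np

def eigen_transform_patch(lr_patch, lr_train, hr_train, k=32):
    """Hallucinate a high-res patch for one flattened low-res patch.

    lr_train, hr_train: corresponding flattened training patches (one per row).
    """
    lr_mu, hr_mu = lr_train.mean(axis=0), hr_train.mean(axis=0)
    # PCA of the centered low-res training matrix via SVD.
    u, s, vt = np.linalg.svd(lr_train - lr_mu, full_matrices=False)
    coeffs = vt[:k] @ (lr_patch - lr_mu)     # project input onto top-k basis
    weights = u[:, :k] @ (coeffs / s[:k])    # combination weights over exemplars
    # Apply the same exemplar weights in the high-resolution domain.
    return hr_mu + weights @ (hr_train - hr_mu)
</pre>
<p>Reconstructing each patch separately and blending overlapping patches preserves local structure in the enhanced image, which matches the motivation given in the abstract.</p>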
<p>When selecting a registration method for fingerprints, the choice is often between a minutiae based and an orientation field based registration method. By combining both methods, instead of selecting one of them, we obtain a one-modality multi-expert registration system. If the combined methods are based on different features in the fingerprint, e.g. the minutiae points and the orientation field respectively, they are uncorrelated and a higher registration performance can be expected than when only one of the methods is used. In this paper two registration methods are discussed that do not use minutiae points, and are therefore candidates to be combined with a minutiae based registration method to build a multi-expert registration system for fingerprints with expected high registration performance. Both methods use complex orientation fields but produce uncorrelated results by construction. One method uses the positions and geometric orientations of symmetry points, i.e. the singular points (SPs) in the fingerprint, to estimate the translation and rotation parameters, respectively, of the Euclidean transformation. The second method uses 1D projections of orientation images to find the transformation parameters. Experimental results are reported.</p>  +
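<p>A hedged sketch of the second idea, 1D projection based registration: the translation between two images can be estimated by cross-correlating the row and column sums of their orientation(-strength) maps. This is a simplified illustration, not the paper's exact algorithm, which also recovers rotation.</p>
<pre>
import numpy as np

def shift_from_projections(img_a, img_b):
    """Estimate the (dy, dx) translation between two same-size maps by
    cross-correlating their 1D row and column projections."""
    def best_shift(p, q):
        corr = np.correlate(p - p.mean(), q - q.mean(), mode="full")
        return int(np.argmax(corr)) - (len(q) - 1)  # lag of the correlation peak
    dy = best_shift(img_a.sum(axis=1), img_b.sum(axis=1))
    dx = best_shift(img_a.sum(axis=0), img_b.sum(axis=0))
    return dy, dx
</pre>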
<p>This paper presents the use of the parametric proportional hazard model (PHM) for reliability ranking of power cables. Here, the Weibull PHM is used to estimate the failure rate of every individual cable based on the age of the cable and a set of explanatory factors. The required information for the proposed method is obtained by exploiting available historical data. This data-driven method does not require any additional measurements on the cables. After individual failure rate estimation, the cables are ranked for maintenance prioritization and repair actions.</p><p>Furthermore, the results of reliability analysis of power cables when considered as repairable or non-repairable components are compared. The paper demonstrates that methods which estimate the time to the first failure (for non-repairable components) lead to incorrect conclusions about the reliability of repairable power cables. The results show that the conclusions about different factors in the PHM and the cable ranking will be misleading if the cables are considered as non-repairable components. The proposed method is used to calculate the failure rate of each individual Paper Insulated Lead Cover (PILC) underground cable in a distribution grid in the south of Sweden.</p>  +
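<p>In a Weibull PHM the covariates scale a Weibull baseline hazard multiplicatively, h(t | x) = (β/η)(t/η)^(β-1) · exp(γᵀx). A minimal sketch of ranking cables by this hazard follows; the shape/scale values, coefficients and factors are hypothetical placeholders, not fitted values from the paper.</p>
<pre>
import numpy as np

def weibull_phm_hazard(t, x, shape, scale, gamma):
    """h(t | x) = (shape/scale) * (t/scale)**(shape-1) * exp(gamma . x)."""
    baseline = (shape / scale) * (t / scale) ** (shape - 1.0)
    return baseline * np.exp(np.dot(gamma, x))

# Hypothetical cables: current age in years plus explanatory factors.
cables = {"A": (35.0, np.array([1.0, 0.2])), "B": (20.0, np.array([0.0, 0.8]))}
gamma = np.array([0.5, 1.1])   # placeholder coefficients
ranking = sorted(cables,
                 key=lambda c: -weibull_phm_hazard(*cables[c], 1.8, 40.0, gamma))
</pre>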
<p>Underground power cables are one of the fundamental elements in power grids, but also one of the more difficult ones to monitor. Those cables are heavily affected by ionization, as well as thermal and mechanical stresses. At the same time, both pinpointing and repairing faults is very costly and time consuming. This has caused many power distribution companies to search for ways of predicting cable failures based on available historical data.</p><p>In this paper, we investigate five different models estimating the probability of failures for in-service underground cables. In particular, we focus on a methodology for evaluating how well different models fit the historical data. In many practical cases, the amount of data available is very limited, and it is difficult to know how much confidence one should have in the goodness-of-fit results.</p><p>We use two goodness-of-fit measures, a commonly used one based on mean square error and a new one based on calculating the probability of generating the data from a given model. The corresponding results for a real data set can then be interpreted by comparing against confidence intervals obtained from synthetic data generated according to different models.</p><p>Our results show that the goodness-of-fit of several commonly used failure rate models, such as linear, piecewise linear and exponential, is virtually identical. In addition, they do not explain the data as well as a new model we introduce: piecewise constant.</p>  +
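<p>One plausible reading of the two ingredients, a piecewise-constant failure-rate model and a probability-based fit score, is sketched below. It assumes per-age failure counts and exposures (cable-years) and scores a model by the Poisson log-likelihood of the observed counts; the breakpoints are illustrative, and this is not necessarily the paper's exact formulation.</p>
<pre>
import numpy as np
from scipy.stats import poisson

def piecewise_constant_rates(ages, failures, exposure, breakpoints):
    """Fit one constant failure rate (failures per cable-year) per age segment."""
    rates = np.empty(len(ages))
    edges = [0.0, *breakpoints, np.inf]
    for lo, hi in zip(edges[:-1], edges[1:]):
        seg = (ages >= lo) & (ages < hi)
        rates[seg] = failures[seg].sum() / exposure[seg].sum()
    return rates

def log_prob_of_data(failures, exposure, rates):
    """Probability-based goodness of fit: Poisson log-likelihood of the counts."""
    return poisson.logpmf(failures, rates * exposure).sum()
</pre>
<p>Comparing this score for a real data set against its distribution over synthetic data generated from each candidate model gives the confidence intervals the abstract refers to.</p>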
<p>A diagnosis and maintenance method, a diagnosis and maintenance assembly comprising a central server and a system, and a computer program for diagnosis and maintenance for a plurality of systems, particularly for a plurality of vehicles, wherein each system provides at least one system-related signal which provides the basis for the diagnosis and/or maintenance of/for the system are provided. The basis for diagnosis and/or maintenance is determined by determining for each system at least one relation between the system-related signals, comparing the compatible determined relations, determining for the plurality of systems based on the result of the comparison which relations are significant relations and providing a diagnosis and/or maintenance decision based on the determined significant relations.</p>  +
<p>Retinotopic sampling and the Gabor decomposition have a well-established role in computer vision in general as well as in face authentication. The concept of Retinal Vision we introduce aims at complementing these biologically inspired tools with models of higher-order visual processes, specifically the Human Saccadic System. We discuss the Saccadic Search strategy, a general purpose attentional mechanism that identifies semantically meaningful structures in images by performing "jumps" (saccades) between relevant locations. Saccade planning relies on a priori knowledge encoded by SVM classifiers. The raw visual input is analysed by means of a log-polar retinotopic sensor, whose receptive fields consist of a vector of modified Gabor filters designed in the log-polar frequency plane. Applicability to complex cognitive tasks is demonstrated by facial landmark detection and authentication experiments over the M2VTS and Extended M2VTS (XM2VTS) databases.</p>  +
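<p>The log-polar retinotopic layout can be illustrated with a few lines of code: sampling points lie on concentric rings whose radius grows exponentially with eccentricity, so resolution is dense at the fovea and coarse in the periphery. The ring and spoke counts below are arbitrary illustrative values, not those of the paper's sensor.</p>
<pre>
import numpy as np

def logpolar_grid(center, rings=6, spokes=16, r0=4.0, growth=1.6):
    """Retinotopic sampling points around a fixation: rings x spokes,
    with radius growing exponentially (log-polar spacing)."""
    cx, cy = center
    radii = r0 * growth ** np.arange(rings)
    angles = 2 * np.pi * np.arange(spokes) / spokes
    return np.array([(cx + r * np.cos(a), cy + r * np.sin(a))
                     for r in radii for a in angles])
</pre>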
<p>A robust air/fuel ratio “soft sensor” is presented based on non-linear signal processing of the ion current signal using neural networks. Care is taken to make the system insensitive to amplitude variations, due to e.g. fuel additives, by suitable preprocessing of the signal.</p>  +
<p>A method for robust tuning of the individual cylinders' air-fuel ratio is proposed. The fuel injection is adjusted so that each cylinder has the same air-fuel ratio in inner control loops, and the resulting air-fuel ratio in the exhaust pipe is controlled with an exhaust gas oxygen (EGO) sensor in an outer control loop to achieve a stoichiometric air-fuel ratio. Correction factors providing cylinder-individual fuel injection timing are calculated based on measurements of the ion currents for the individual cylinders. An implementation in a production vehicle is shown with results from driving on the highway.</p>  +
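<p>As a hedged illustration of the inner loop only: an integral-type update can nudge each cylinder's fuel correction toward equalising per-cylinder air-fuel ratio estimates derived from the ion current. The gain and structure are illustrative assumptions, not the controller in the paper.</p>
<pre>
def update_corrections(afr_estimates, corrections, gain=0.05):
    """One inner-loop step: move each cylinder's fuel correction so that its
    ion-current-based AFR estimate approaches the engine average (a lean
    cylinder, AFR above average, gets more fuel)."""
    mean_afr = sum(afr_estimates) / len(afr_estimates)
    return [c + gain * (afr - mean_afr)
            for c, afr in zip(corrections, afr_estimates)]
</pre>
<p>The outer EGO loop would then trim the common fuel amount so that the exhaust mixture stays stoichiometric.</p>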
<p>Recent research has found deep neural networks to be vulnerable, by means of prediction error, to images corrupted by small amounts of non-random noise. These images, known as adversarial examples, are created by exploiting the input-to-output mapping of the network. For the MNIST database, we study in this paper to what extent known regularization/robustness methods improve the generalization performance of deep neural networks when classifying adversarial examples and examples perturbed with random noise. We conduct a comparison of these methods with our proposed robustness method, an ensemble of models trained on adversarial examples, which is able to clearly reduce the prediction error. Apart from the robustness experiments, human classification accuracy for adversarial examples and examples perturbed with random noise is measured. The obtained human classification accuracy is compared to the accuracy of deep neural networks measured in the same experimental settings. The results indicate that human performance does not suffer from neural network adversarial noise.</p>  +
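<p>The abstract does not name the attack; a common choice for crafting such examples is the fast gradient sign method (FGSM), sketched here in PyTorch as an assumption, not as the paper's procedure.</p>
<pre>
import torch
import torch.nn.functional as F

def fgsm_examples(model, x, y, eps=0.1):
    """Perturb each input pixel by eps in the gradient-sign direction that
    increases the classification loss, yielding adversarial examples."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
</pre>
<p>An ensemble defence in the spirit of the abstract would train several models on a mix of clean and such perturbed examples and average their predictions.</p>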
<p>This paper describes a method of detecting parallel rows on an agricultural field using an omnidirectional camera. The method works both on cameras with a fisheye lens and cameras with a catadioptric lens. A combination of an edge based method and a Hough transform method is suggested to find the rows. The vanishing point of several parallel rows is estimated using a second Hough transform. The method is evaluated on synthetic images generated with calibration data from real lenses. Scenes with several rows are produced, where each plant is positioned with a specified error. Experiments are performed on these synthetic images and on real field images. The results show that good accuracy is obtained for the vanishing point once it is detected correctly. Furthermore, the edge based method works best when the rows consist of solid lines, and the Hough method works best when the rows consist of individual plants. The experiments also show that the combined method provides better detection than using either method separately.</p>  +
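<p>For intuition, a compact sketch of Hough-based row and vanishing point detection on an ordinary perspective image (the paper's omnidirectional geometry needs an extra mapping step): lines are found in an edge map, and the vanishing point is taken as a robust intersection of the detected line pairs. Thresholds are illustrative.</p>
<pre>
import cv2
import numpy as np

def rows_and_vanishing_point(gray):
    """Detect candidate row lines with a Hough transform, then estimate the
    common vanishing point as the median intersection of line pairs."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=120)
    if lines is None:
        return None
    params = [l[0] for l in lines]               # (rho, theta) per line
    pts = []
    for i in range(len(params)):
        for j in range(i + 1, len(params)):
            (r1, t1), (r2, t2) = params[i], params[j]
            A = np.array([[np.cos(t1), np.sin(t1)],
                          [np.cos(t2), np.sin(t2)]])
            if abs(np.linalg.det(A)) > 1e-6:     # skip near-parallel pairs
                pts.append(np.linalg.solve(A, np.array([r1, r2])))
    return np.median(np.array(pts), axis=0) if pts else None
</pre>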
S
<p>Symmetry Assessment by Finite Expansion (SAFE) is a novel description of image information by means of the Generalized Structure Tensor. It represents orientation data in the neighbourhood of key points projected onto the space of harmonic functions, creating a geometrically interpretable feature of low dimension. The proposed feature has built-in quality metrics reflecting the accuracy of the extracted feature and ultimately the quality of the key point. The feature vector is orientation invariant in that it is orientation steerable with low computational cost. We provide experiments on minutia key points of forensic fingerprints to demonstrate its usefulness. Matching is performed based on minutiae in regions with high orientation variance, e.g. in the proximity of core points. The performance of single minutia matching equals 20% EER and 69% Rank-20 CMC on SD27, the only publicly available annotated forensic fingerprint database.</p><p>Further, we complement SAFE descriptors of orientation maps with SAFE descriptors of frequency features in a similar manner. In the case of combined features the performance improves further to 19% EER and 74% Rank-20 CMC. © 2014 IEEE.</p>  +
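<p>Loosely modelled on this idea, and only as an assumption about the construction rather than the exact SAFE definition: local double-angle orientation data z = exp(i·2θ) around a key point can be projected onto a finite set of circular harmonics exp(i·n·φ). The magnitude of each coefficient then acts as a quality measure, and its argument rotates predictably with the patch, which is what makes the descriptor steerable.</p>
<pre>
import numpy as np

def harmonic_coefficients(orient2, center, radius, orders=(-2, -1, 0, 1, 2)):
    """Project a complex double-angle orientation patch onto circular
    harmonics; returns one complex coefficient per harmonic order."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = xs**2 + ys**2 <= radius**2
    phi = np.arctan2(ys, xs)
    cy, cx = center
    patch = orient2[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
    return {n: (patch * np.exp(-1j * n * phi))[mask].sum() for n in orders}
</pre>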
<p>The Gabor decomposition is a ubiquitous tool in computer vision. Nevertheless, it is generally considered computationally demanding for active vision applications. We suggest an attention-driven approach to feature detection inspired by the human saccadic system. A dramatic speedup is achieved by computing the Gabor decomposition only on the points of a sparse retinotopic grid. An off-line eye detection application and a real-time head localisation and tracking system are presented. The real-time system features a novel eyeball-mounted camera designed to simulate the dynamic performance of the human eye and is, to the best of our knowledge, the first example of an active vision system based on the Gabor decomposition.</p>  +
<p>In this paper we present a new and uniform way of evaluating 3D sensor performance. It is rare that standardized test specifications are used in research on mobile robots. A test rig with objects from the industrial safety standard EN1525 (Safety of industrial trucks - driverless trucks and their systems) is extended by thin vertical and horizontal objects that represent the fork of a forklift, a ladder and a hanging cable. A comparison of a trinocular stereo vision system, a 3D TOF (Time-Of-Flight) range camera and a Kinect device is made to verify the use of the test rig. All sensors detect the objects in the safety standard EN1525. The Kinect and the 3D TOF camera show reliable results for the objects in the safety standard at distances up to 5 m. The trinocular system is the only sensor in the test that detects the thin structures. The proposed test rig can thus be used to evaluate the ability of sensors to detect thin structures.</p>  +
<p>This paper is concerned with a multi-resolution tool for screening paper formation variations in various frequency regions on the production line. A paper web is illuminated by two red diode lasers, and the reflected light, recorded as two time series of high-resolution measurements, constitutes the input signal to the papermaking process monitoring system. The time series are divided into blocks and each block is analyzed separately. The task is treated as kernel based novelty detection applied to a multi-resolution time series representation obtained from band-pass filtering of the Fourier power spectrum of the series. The frequency content of each frequency region is characterized by a feature vector, which is transformed using canonical correlation analysis and then categorized into the inlier or outlier class by the novelty detector. A ratio of outlying data points significantly exceeding the predetermined value indicates abnormalities in the paper formation. The tools developed are used for online paper formation monitoring in a paper mill.</p>  +
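<p>A stripped-down sketch of the detection chain (omitting the canonical correlation step, and with block sources and thresholds as assumed placeholders): each block is summarized by log band-energies of its power spectrum and scored by a kernel one-class classifier trained on in-control data.</p>
<pre>
import numpy as np
from sklearn.svm import OneClassSVM

def band_features(block, n_bands=8):
    """Log energies of band-pass regions of the block's Fourier power spectrum."""
    power = np.abs(np.fft.rfft(block)) ** 2
    bands = np.array_split(power[1:], n_bands)   # drop DC, split into bands
    return np.log([b.sum() for b in bands])

# normal_blocks / new_blocks: assumed lists of fixed-length signal blocks.
train = np.stack([band_features(b) for b in normal_blocks])
detector = OneClassSVM(nu=0.05, gamma="scale").fit(train)
flags = detector.predict(np.stack([band_features(b) for b in new_blocks]))
alarm = (flags == -1).mean() > 0.2   # outlier ratio above the preset level
</pre>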
<p>This paper is concerned with data mining techniques for identifying the main parameters of the printing press, the printing process and the paper affecting the occurrence of paper web breaks in a pressroom. Two approaches are explored. The first one treats the problem as a task of data classification into "break" and "non-break" classes. The procedures of classifier design and selection of relevant input variables are integrated into one process based on genetic search. The search process results in a set of input variables providing the lowest average loss incurred in taking decisions. The second approach, also based on genetic search, combines procedures of input variable selection and data mapping into a low dimensional space. The tests have shown that the web tension parameters are amongst the most important ones. It was also found that, provided the basic off-line paper parameters are in an acceptable range, the paper related parameters recorded online contain more information for predicting the occurrence of web breaks than the off-line ones. Using the selected set of parameters, on average, 93.7% of the test set data were classified correctly. The average classification accuracy of the break cases was equal to 76.7%.</p>  +
<p>The objective of this work is to identify the main parameters of the printing press, the printing process, and the paper affecting the occurrence of web breaks in a pressroom. Two approaches are explored. The first one treats the problem as a task of data classification into "break" and "non-break" classes. The procedures of classifier design and selection of relevant input variables are integrated into one process based on genetic search. The second approach, targeted for data visualization and also based on genetic search, combines procedures of input variable selection and data mapping into a two-dimensional space. The genetic search-based analysis has shown that the web tension parameters are amongst the most important ones. It was also found that the group of paper-related parameters recorded online contains more information for predicting the occurrence of web breaks than the group of traditional parameters recorded off-line at a paper lab. Using the selected set of parameters, on average, 93.7% of the test set data were classified correctly. The average classification accuracy of the web break cases was equal to 76.7%. (C) 2010 Elsevier B.V. All rights reserved.</p>  +
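<p>The genetic search over input variables described in the two abstracts above can be pictured as evolving feature-subset bitmasks whose fitness is the cross-validated accuracy of a classifier trained on the selected columns. The sketch below uses logistic regression purely as a stand-in; the abstracts do not specify the classifier, and all hyperparameters are illustrative.</p>
<pre>
import random
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def ga_feature_select(X, y, pop=20, gens=15, p_mut=0.05, seed=0):
    """Toy genetic search over boolean feature masks for "break"/"non-break"
    classification; returns the best mask found."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]

    def fitness(mask):
        if not mask.any():
            return 0.0
        clf = LogisticRegression(max_iter=500)
        return cross_val_score(clf, X[:, mask], y, cv=3).mean()

    population = [rng.random(n) < 0.5 for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]
        children = []
        while len(parents) + len(children) < pop:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)
            child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
            child ^= rng.random(n) < p_mut               # bit-flip mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
</pre>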