Property:Abstract

From ISLAB/CAISR

This is a property of type Text.

Showing 20 pages using this property.
S
<p>This paper presents three schemes for soft fusion of the outputs of multiple neural classifiers. The weights assigned to classifiers, or to groups of them, are data dependent. The first scheme performs a linear combination of classifier outputs and is, in fact, the BADD defuzzification strategy. The second approach involves the calculation of fuzzy integrals. The last scheme performs weighted averaging with data-dependent weights. An empirical evaluation using widely accessible data sets substantiates the validity of the approaches with data-dependent weights compared to various existing schemes for combining multiple classifiers. The majority rule, combination by averaging, weighted averaging, the Borda count, and the fuzzy integral have been used for the comparison.</p>  +
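The data-dependent linear combination in the first scheme can be illustrated with a small sketch. This is not the paper's implementation, only a minimal BADD-style combination in Python; the function name, the exponent `gamma`, and the example supports are our own assumptions, and classifier outputs are assumed to be per-class support values in [0, 1]:

```python
import numpy as np

def badd_combination(outputs, gamma=2.0):
    """BADD-style soft fusion of classifier outputs.

    outputs: array of shape (n_classifiers, n_classes) with class supports.
    The weights are data dependent: each support is weighted by itself
    raised to the power gamma, so confident outputs contribute more.
    """
    outputs = np.asarray(outputs, dtype=float)
    weights = outputs ** gamma                            # data-dependent weights
    return (weights * outputs).sum(axis=0) / weights.sum(axis=0)

# Three classifiers voting on two classes.
supports = [[0.9, 0.1],
            [0.6, 0.4],
            [0.2, 0.8]]
fused = badd_combination(supports)
```

With `gamma = 0` this reduces to plain averaging, and large `gamma` approaches a max-confidence rule, which is why the exponent controls how strongly the fusion trusts confident classifiers.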
<p>Two spark advance control systems are outlined, both based on feedback from nonlinear neural network soft sensors and ion current detection. One uses an estimate of the location of the pressure peak and the other an estimate of the location of the center of combustion. Both quantities are estimated from the ion current signal using neural networks. The estimates are correct to within roughly two crank angle degrees when evaluated on a cycle-to-cycle basis, and within roughly one crank angle degree when the quantities are averaged over consecutive cycles.</p><p>The pressure peak detection based control system is demonstrated on a SAAB 9000 car, equipped with a 2.3 liter low-pressure turbocharged engine, during normal highway driving.</p>  +
<p>This paper proposes a new robust bi-modal audio-visual digit and speaker recognition system based on lip-motion and speech biometrics. To increase the robustness of digit and speaker recognition, we propose a method using speaker lip-motion information extracted from video sequences with low resolution (128 × 128 pixels). We investigate a biometric system for digit recognition and speaker identification based on lip-motion estimation combined with speech information and Support Vector Machines. The acoustic and visual features are fused at the feature level, showing favourable results, with digit recognition accuracy ranging from 83% to 100% and speaker recognition reaching 100% on the XM2VTS database.</p>  +
<p>Health technological systems that learn from and react to how humans behave in sensor-equipped environments are today being commercialized. These systems rely on the assumptions that training data and testing data share the same feature space and arise from the same underlying distribution, which is commonly unrealistic in real-world applications. Instead, the use of transfer learning could be considered. In order to transfer knowledge between a source and a target domain, these should be mapped to a common latent feature space. In this work, the dimensionality reduction algorithm t-SNE is used to map data to a similar feature space, and it is further investigated through a proposed novel analysis of output stability. The proposed analysis, Normalized Linear Procrustes Analysis (NLPA), extends the existing Procrustes and Local Procrustes algorithms for aligning manifolds. The methods are tested on human behaviour patterns collected in a smart home environment. Results show high partial output stability for the t-SNE algorithm on the tested input data, for which NLPA is able to detect clusters that are individually aligned and compared. The results highlight the importance of understanding output stability before incorporating dimensionality reduction algorithms into further computation, e.g. for transfer learning.</p>  +
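The idea of aligning two embeddings with a Procrustes-style analysis can be sketched as follows. This is not NLPA itself, only standard orthogonal Procrustes alignment in Python under our own naming; it assumes two 2-D embeddings of the same points, here a synthetic map and a rotated copy standing in for two t-SNE runs:

```python
import numpy as np

def procrustes_align(A, B):
    """Orthogonal Procrustes: find the rotation R minimising ||A R - B||_F,
    after centring and scale normalisation, and return R with the residual
    disparity. A low disparity means the two embeddings agree up to rotation."""
    A = (A - A.mean(0)) / np.linalg.norm(A - A.mean(0))
    B = (B - B.mean(0)) / np.linalg.norm(B - B.mean(0))
    U, _, Vt = np.linalg.svd(A.T @ B)
    R = U @ Vt
    disparity = np.linalg.norm(A @ R - B) ** 2
    return R, disparity

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 2))            # one embedding of 50 points
theta = 0.7                             # an arbitrary rotation angle
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
R, d = procrustes_align(X, X @ rot)     # second embedding: rotated copy
```

Because the second embedding is an exact rotation of the first, the disparity is essentially zero; on real repeated t-SNE runs the disparity quantifies how stable the output is.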
<p>Predictive Maintenance (PM) is a proactive maintenance strategy that tries to minimize a system’s downtime by predicting failures before they happen. It uses data from sensors to measure a component’s state of health and make forecasts about its future degradation. However, existing PM methods typically focus on individual measurements, although it is natural to assume that a history of measurements carries more information than a single one. This paper aims at incorporating such information into PM models. In practice, especially in the automotive domain, diagnostic models have low performance due to a large amount of noise in the data and limited sensing capability. To address this issue, this paper proposes to use a specific type of ensemble learning known as Stacked Ensemble. The idea is to aggregate the predictions of multiple models, consisting of Long Short-Term Memory (LSTM) and Convolutional-LSTM networks, via a meta model in order to boost performance. A Stacked Ensemble model performs well when its base models are as diverse as possible. To this end, each such model is trained using a specific combination of three aspects: feature subsets, past dependency horizon, and model architecture. Experimental results demonstrate the benefits of the proposed approach in a case study of heavy-duty truck turbochargers.</p>  +
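The stacking idea, base-model predictions fed as features to a meta model, can be sketched with simple linear stand-ins. The LSTM and Conv-LSTM base learners are replaced here by ridge regressors for brevity, the data is synthetic, and a proper stacked ensemble would train the meta model on out-of-fold predictions; this in-sample sketch only illustrates the data flow:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical degradation data: target is a noisy linear trend.
X = rng.normal(size=(200, 3))
y = X @ np.array([0.5, -1.0, 2.0]) + rng.normal(scale=0.1, size=200)

def ridge_fit(X, y, lam):
    """Closed-form ridge regression weights."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Two deliberately diverse base models: different features and regularisation.
w1 = ridge_fit(X, y, lam=0.1)
w2 = ridge_fit(X[:, :2], y, lam=10.0)        # weaker model: fewer features

# Base predictions become the meta model's input features.
P = np.column_stack([X @ w1, X[:, :2] @ w2])
w_meta = ridge_fit(P, y, lam=1e-6)           # meta model: linear blend

blend = P @ w_meta
```

Because the meta model can always recover the best single base model (weight 1 on its column, 0 elsewhere), the blend is never worse than the strongest member on the data it is fit to; the gain from stacking comes from diverse base models making different errors.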
<p>Gait measurement is of interest to both orthopedists and biomechanical engineers. It is useful for the analysis of gait disorders and in the design of orthotic and prosthetic devices.</p><p>In this chapter an algorithm is presented for estimating the foot angle in the sagittal plane, independent of gait conditions. Only one gyro is used during swing, and two accelerometers are needed for calibration during stance. Furthermore, the sensor placement at the front of the foot avoids the need for a heel strike to detect the stance transition, so stair walking can also be studied. From the estimated swing trajectory, three different gait conditions (up-stair, horizontal, and down-stair) are classified.</p>  +
<p><strong>Motivation:</strong> Understanding the substrate specificity of HIV-1 protease is important when designing effective HIV-1 protease inhibitors. Furthermore, characterizing and predicting the cleavage profile of HIV-1 protease is essential to generate and test hypotheses of how HIV-1 affects proteins of the human host. Currently available tools for predicting cleavage by HIV-1 protease can be improved.</p><p><strong>Results:</strong> The linear support vector machine with orthogonal encoding is shown to be the best predictor for HIV-1 protease cleavage. It is considerably better than current publicly available predictor services. It is also found that schemes using physicochemical properties do not improve over the standard orthogonal encoding scheme. Some issues with the currently available data are discussed.</p><p><strong>Availability:</strong> The data sets used, which are the most important part, are available at the UCI Machine Learning Repository. The tools used are all standard and easily available. © 2014 The Author.</p>  +
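The orthogonal encoding named in the results is a one-hot scheme over the amino-acid alphabet and can be sketched as follows; the function name is ours, and the octamer is only an illustrative 8-residue substrate:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues

def orthogonal_encode(octamer):
    """Orthogonal (one-hot) encoding: each residue becomes a 20-dim
    indicator vector, so an 8-residue substrate maps to 160 features
    suitable for a linear SVM."""
    vec = np.zeros(len(octamer) * 20)
    for i, aa in enumerate(octamer):
        vec[i * 20 + AMINO_ACIDS.index(aa)] = 1.0
    return vec

x = orthogonal_encode("SQNYPIVQ")   # example octamer around a cleavage site
```

The encoding is called orthogonal because any two distinct residues map to orthogonal vectors, so a linear model can assign an independent weight to every residue at every position.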
<p>In this paper we present a stereo visual odometry system for mobile robots that is not sensitive to uneven terrain. Two cameras are mounted perpendicular to the ground, and height and traveled distance are calculated using normalized cross correlation. A method for evaluating the system is developed, in which flower boxes containing representative surfaces are placed in a metal-working lathe. The cameras are mounted on the carriage, which can be positioned manually with 0.1 mm accuracy. Images are captured every 10 mm over 700 mm. The tests are performed on eight different surfaces representing real-world situations. The resulting error is less than 0.6% of the traveled distance on surfaces where the maximum height variation is measured to be 96 mm. The variance over eight test runs, totalling 5.6 m, is measured to be 0.040 mm. This accuracy is sufficient for crop-scale agricultural operations.</p>  +
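The matching score used above, normalized cross correlation, can be sketched in a few lines. This is a generic zero-mean NCC, not the paper's pipeline; the function name and the synthetic patches are our own:

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross correlation of two equally sized patches:
    +1 for a perfect match, near 0 for unrelated texture."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

rng = np.random.default_rng(1)
t = rng.normal(size=(32, 32))          # a synthetic texture patch
same = ncc(t, t)                       # identical patches match perfectly
shifted = ncc(t, 2.0 * t + 5.0)        # NCC is invariant to gain and offset
```

The gain-and-offset invariance is what makes NCC robust to lighting changes between frames, which matters outdoors on agricultural surfaces.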
<p>With the introduction of unleaded gasoline, special fuel agents have appeared on the market for lubricating and cleaning the valve seats. These fuel agents often contain alkali metals that have a significant impact on the ion current signal, thus affecting strategies that use the ion current for engine control and diagnosis, e.g., for estimating the location of the pressure peak. This paper introduces a method for making neural network algorithms robust to expected disturbances in the input signal and demonstrates how well this method applies to the case of disturbances to the ion current signal due to fuel additives containing sodium. The performance of the neural estimators is compared to a Gaussian fit algorithm, which they outperform. It is also shown that using a fuel additive significantly improves the estimation of the location of the pressure peak. © 2001 Society of Automotive Engineers, Inc.</p>  +
<p>The maximum current that an overhead transmission line can continuously carry depends on external weather conditions, most commonly obtained from real-time streaming weather sensors. The accuracy of the sensor data is very important in order to avoid problems such as overheating. Furthermore, faulty sensor readings may cause operators to limit or even stop the energy production from renewable sources in radial networks. This paper presents a method for detecting and replacing sequences of consecutive faulty data originating from streaming weather sensors. The method is based on a combination of (a) a set of constraints obtained from derivatives of consecutive data, and (b) association rules that are automatically generated from historical data. In smart grids, a large amount of historical data from different weather stations is available but rarely used. In this work, we show that mining and analyzing this historical data provides valuable information that can be used for detecting and replacing faulty sensor readings. We compare the result of the proposed method against the exponentially weighted moving average and vector autoregression models. Experiments on data sets with real and synthetic errors demonstrate the good performance of the proposed method for monitoring weather sensors.</p>  +
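The exponentially weighted moving average used as a comparison baseline can be sketched as a simple residual-based detector. This is an illustration on synthetic data with our own parameter choices, not the paper's method:

```python
import numpy as np

def ewma_predict(series, alpha=0.3):
    """One-step-ahead EWMA forecast: s_{t+1} = alpha * x_t + (1 - alpha) * s_t."""
    pred = np.empty_like(series, dtype=float)
    s = series[0]
    for i, x in enumerate(series):
        pred[i] = s                      # forecast before observing x_t
        s = alpha * x + (1 - alpha) * s
    return pred

def flag_faults(series, alpha=0.3, k=4.0):
    """Flag readings whose deviation from the EWMA forecast exceeds
    k standard deviations of the residuals."""
    resid = series - ewma_predict(series, alpha)
    return np.abs(resid) > k * resid.std()

rng = np.random.default_rng(2)
temps = rng.normal(10.0, 0.5, size=300)  # synthetic ambient-temperature stream
temps[150] = 40.0                        # one injected faulty reading
flags = flag_faults(temps)
```

A single-point baseline like this catches isolated spikes but struggles with long sequences of consecutive faulty values, which is the case the paper's constraint-and-rule method targets.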
<p>OCR technology for Latin scripts is well advanced in comparison to that for other scripts. However, the available results for Latin cannot always be directly adopted for other scripts such as the Ethiopic script. In this paper, we propose a novel approach that uses structural and syntactic techniques for the recognition of Ethiopic characters. We show that primitive structures and their spatial relationships form a unique set of patterns for each character. The relationships of primitives are represented by a special tree structure, which is also used to generate a pattern. A knowledge base of the alphabet that stores the possible patterns for each character is built. Recognition is then achieved by matching the generated pattern against each pattern in the knowledge base. Structural features are extracted using the direction field tensor. Experimental results are reported, showing that the recognition system is insensitive to variations in font type, size, and style.</p>  +
<p>Robots that use vision for localization need to handle environments which are subject to seasonal and structural change, and operate under changing lighting and weather conditions. We present a framework for lifelong localization and mapping designed to provide robust and metrically accurate online localization in these kinds of changing environments. Our system iterates between offline map building, map summary, and online localization. The offline mapping fuses data from multiple visually varied datasets, thus dealing with changing environments by incorporating new information. Before passing this data to the online localization system, the map is summarized, selecting only the landmarks that are deemed useful for localization. This Summary Map enables online localization that is accurate and robust to the variation of visual information in natural environments while still being computationally efficient.</p><p>We present a number of summary policies for selecting useful features for localization from the multi-session map and explore the tradeoff between localization performance and computational complexity. The system is evaluated on 77 recordings, with a total length of 30 kilometers, collected outdoors over sixteen months. These datasets cover all seasons, various times of day, and changing weather such as sunshine, rain, fog, and snow. We show that it is possible to build consistent maps that span data collected over an entire year, and cover day-to-night transitions. Simple statistics computed on landmark observations are enough to produce a Summary Map that enables robust and accurate localization over a wide range of seasonal, lighting, and weather conditions. © 2015 Wiley Periodicals, Inc.</p>  +
<p>Biometric research is heading towards enabling more relaxed acquisition conditions. This has effects on the quality and resolution of acquired images, severely affecting the accuracy of recognition systems if not tackled appropriately. In this chapter, we give an overview of recent research in super-resolution reconstruction applied to biometrics, with a focus on face and iris images in the visible spectrum, two prevalent modalities in selfie biometrics. After an introduction to the generic topic of super-resolution, we investigate methods adapted to cater for the particularities of these two modalities. By experiments, we show the benefits of incorporating super-resolution to improve the quality of biometric images prior to recognition. © Springer Nature AG 2019</p>  +
<p>A study of the dimensionality of the Face Authentication problem using Principal Component Analysis (PCA) and a novel dimensionality reduction algorithm that we call Support Vector Features (SVFs) is presented. Starting from a Gabor feature space, we show that PCA and SVFs identify distinct subspaces with comparable authentication and generalisation performance. Experiments using KNN classifiers and Support Vector Machines (SVMs) on these reduced feature spaces show that the dimensionality at which saturation of the authentication performance is achieved heavily depends on the choice of the classifier. In particular, SVMs involve directions in feature space that carry little variance and therefore appear to be vulnerable to excessive PCA-based compression.</p>  +
<p>In the era of big data, it is imperative to assist the human analyst in the endeavor to find solutions to ill-defined problems, i.e. to “detect the expected and discover the unexpected” (Yi et al., 2008). To their aid, a plethora of analysis support systems is available to the analysts. However, these support systems often lack visual and interactive features, leaving the analysts with no opportunity to guide, influence and even understand the automatic reasoning performed and the data used. Yet, to be able to appropriately support the analysts in their sense-making process, we must look at this process more closely. In this paper, we present the results from interviews conducted with data analysts from the automotive industry, in which we investigated how they handle data, analyze it, and make decisions based on it, outlining directions for the development of analytical support systems in this area. © Springer International Publishing Switzerland 2016.</p>  +
<p>Symbolization of time series has successfully been used to extract temporal patterns from experimental data. Segmentation is an unavoidable step of the symbolization process, and it may be performed in one of two domains: the amplitude domain or the temporal domain. Each of these two groups of methods has advantages and disadvantages. Can their performance be estimated a priori based on signal characteristics? This paper evaluates the performance of SAX, Persist and ACA on 47 different time series, based on signal periodicity. Results show that SAX tends to perform best on random signals, whereas ACA may outperform the other methods on highly periodic signals. However, the results do not support that the most adequate method can be determined a priori.</p>  +
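The SAX method evaluated above can be sketched in a few lines. This is a minimal version with a fixed 4-symbol alphabet and a segment count of our own choosing, not the evaluated implementation:

```python
import numpy as np

def sax(series, n_segments=8, alphabet="abcd"):
    """Minimal SAX: z-normalise, reduce with Piecewise Aggregate
    Approximation (PAA), then map each segment mean to a symbol using
    breakpoints that make the symbols equiprobable under N(0, 1)."""
    x = (series - series.mean()) / series.std()
    segments = np.array_split(x, n_segments)
    paa = np.array([seg.mean() for seg in segments])
    # Gaussian quartile breakpoints for a 4-symbol alphabet.
    breakpoints = np.array([-0.6745, 0.0, 0.6745])
    return "".join(alphabet[np.searchsorted(breakpoints, m)] for m in paa)

t = np.linspace(0, 2 * np.pi, 64)
word = sax(np.sin(t))        # a highly periodic signal becomes a short word
```

Because segmentation here is purely temporal (fixed-width PAA windows), a periodic signal can straddle segment boundaries, which is one intuition for why amplitude-domain methods such as ACA may do better on highly periodic data.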
<p>Common image features carry too little information for the identification of forensic images of fingerprints, where only a small area of the finger is imaged and hence only a small number of key points are available. Noise, nonlinear deformation, and unknown rotation are additional issues that complicate the identification of forensic fingerprints. We propose a feature extraction method that describes the image information around key points: Symmetry Assessment by Finite Expansion (SAFE). The feature set has built-in quality estimates as well as a rotation invariance property. The theory is developed for continuous space, allowing compensation for rotation directly in the feature space when images undergo such rotation, without actually rotating them. Experiments are presented supporting that the use of these features improves the identification of forensic fingerprint images from the public NIST SD27 database. Matching orientation information in a neighborhood of core points yields an EER of 24% with these features alone, without using minutiae constellations, in contrast to 36% when using minutiae alone. The rank-20 CMC is 58%, which is lower than the 67% obtained when using notably more manually collected minutiae information.</p>  +
<p>A common framework for feature extraction in fingerprints is proposed, based on certain symmetries. The proposal includes representations, filters, and filtering techniques for common features, including minutiae points, singular points, and the ridge and valley patterns.</p><p>The filters are complex-valued and designed to identify certain symmetries, called rotational symmetries, and they are applied to the squared complex gradient field of an image. The filters are used as extractors for known fingerprint features. The filter response magnitude is a certainty measure for the existence of a symmetry, and its argument is the spatial orientation of that symmetry. This means that the position and the spatial orientation of a fingerprint feature are jointly estimated in a single filtering step. In the proposed framework, the position and orientation of singular points are extracted using a multi-scale filtering technique. This strategy is taken to increase the signal-to-noise ratio in the extraction, and it is possible because singular points have a large spatial support in the orientation field. Experiments show that the position is extracted with a precision of 5 ± 3 pixels<sup>1</sup> and the orientation with a precision of 0 ± 4°, with an EER of approximately 4%. The estimated position and orientation of singular points are used in an alignment experiment, which yielded an unbiased alignment error with a standard deviation of 13 pixels<sup>1</sup>.</p><p>A single-modality, multi-expert registration experiment is presented, using singular points and orientation images to estimate the registration parameters.</p><p><sup>1</sup>A fingerprint wavelength is on average 10 pixels.</p>  +
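The squared complex gradient field that the rotational-symmetry filters operate on can be sketched as follows. The synthetic ridge image and the function name are ours; the point of squaring is that antiparallel gradients (the two sides of a ridge) map to the same value, giving a double-angle orientation field:

```python
import numpy as np

def orientation_field(img):
    """Squared complex gradient: z = (gx + i*gy)**2. Opposite gradients
    give the same z, so arg(z) encodes twice the local orientation and
    |z| its certainty."""
    gy, gx = np.gradient(img.astype(float))   # axis 0 = rows (y), axis 1 = cols (x)
    return (gx + 1j * gy) ** 2

# Synthetic vertical ridges: intensity varies only along x.
x = np.arange(64)
img = np.tile(np.sin(2 * np.pi * x / 10), (64, 1))
z = orientation_field(img)
```

For these vertical ridges every gradient points along x, so the field is purely real and non-negative: a constant double-angle orientation of zero, exactly the kind of locally coherent field on which the rotational-symmetry filters respond strongly.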