Property:Abstract

From ISLAB/CAISR

This is a property of type Text.

Showing 20 pages using this property.
L
<p>The present chapter reports on the use of lip motion as a stand-alone biometric modality as well as a modality integrated with audio speech for identity recognition, using digit recognition as a support. First, the authors estimate motion vectors from images of lip movements. The motion is modeled as the distribution of apparent line velocities in the movement of brightness patterns in an image. Then, they construct compact lip-motion features from the regional statistics of the local velocities. These can be used alone or merged with audio features to recognize identity or the uttered digit. The authors present person recognition results using the XM2VTS database, which contains video and audio data of 295 people. Furthermore, they present results on digit recognition when it is used in a text-prompted mode to verify the liveness of the user. Such user challenges are intended to reduce the risk of replay attacks against the audio system.</p>  +
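<p>As a rough illustration of the feature construction described above (not the chapter's own line-velocity estimator), the following sketch uses OpenCV's dense Farneback optical flow as a stand-in for the velocity field and computes regional mean/standard-deviation statistics over an assumed 4x4 grid of the mouth region.</p>
<pre>
# Minimal sketch: dense optical flow as a stand-in for the line-velocity field,
# followed by regional velocity statistics as a compact feature vector.
import cv2
import numpy as np

def lip_motion_features(prev_gray, curr_gray, grid=(4, 4)):
    """Per-cell mean/std of horizontal and vertical velocities (uint8 grayscale inputs)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = flow.shape[:2]
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            cell = flow[i * h // gh:(i + 1) * h // gh,
                        j * w // gw:(j + 1) * w // gw]
            vx, vy = cell[..., 0], cell[..., 1]
            feats.extend([vx.mean(), vx.std(), vy.mean(), vy.std()])
    return np.asarray(feats)
</pre>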
<p>We propose an algorithm for detecting the mouth events of opening and closing. Our method is translation and rotation invariant, works at very fast speeds, and does not require segmented lips. The approach is based on a recently developed optical flow algorithm that handles the motion of linear structures in a stable and consistent way. Furthermore, we provide a semi-automatic tool for generating ground-truth segmentation of video data, also based on the optical flow algorithm, used for tracking keypoints at faster than 200 frames/second. We provide ground truth for 50 sessions of speech of the XM2VTS database (16), available for download, and the means to segment further sessions with a relatively small amount of user interaction. We use the generated ground truth to test the proposed algorithm for detecting events, and show it to yield promising results. The semi-automatic tool will be a useful resource for researchers in need of ground-truth segmentation from video for the XM2VTS database and others.</p>  +
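<p>The event detector below is only a generic illustration under assumed heuristics, not the paper's method: it tracks mouth-region keypoints with pyramidal Lucas-Kanade (instead of the linear-structure optical flow used in the paper) and flags opening/closing from changes in the vertical spread of the tracked points.</p>
<pre>
# Illustrative sketch only: Lucas-Kanade keypoint tracking in the mouth region,
# with opening/closing flagged from changes in the points' vertical spread.
import cv2
import numpy as np

def vertical_spread(points):
    return points[:, 1].max() - points[:, 1].min()

def detect_mouth_events(frames, mouth_roi, open_thresh=2.0):
    """frames: list of uint8 grayscale images; mouth_roi: (x, y, w, h)."""
    x, y, w, h = mouth_roi
    mask = np.zeros_like(frames[0], dtype=np.uint8)
    mask[y:y + h, x:x + w] = 255
    pts = cv2.goodFeaturesToTrack(frames[0], maxCorners=50, qualityLevel=0.01,
                                  minDistance=3, mask=mask)
    events, prev_spread = [], vertical_spread(pts.reshape(-1, 2))
    for t in range(1, len(frames)):
        pts, status, _ = cv2.calcOpticalFlowPyrLK(frames[t - 1], frames[t],
                                                  pts, None)
        pts = pts[status.ravel() == 1].reshape(-1, 1, 2)  # keep tracked points
        spread = vertical_spread(pts.reshape(-1, 2))
        if spread - prev_spread > open_thresh:
            events.append((t, "opening"))
        elif prev_spread - spread > open_thresh:
            events.append((t, "closing"))
        prev_spread = spread
    return events
</pre>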
<p>Accurate fingerprint recognition presupposes robust feature extraction, which is often hampered by noisy input data. We suggest common techniques for both enhancement and minutiae extraction, employing symmetry features. For enhancement, a Laplacian-like image pyramid is used to decompose the original fingerprint into sub-bands corresponding to different spatial scales. In a further step, contextual smoothing is performed on these pyramid levels, where the corresponding filtering directions stem from the frequency-adapted structure tensor (linear symmetry features). For minutiae extraction, parabolic symmetry is added to the local fingerprint model, which allows the position and direction of a minutia to be detected accurately and simultaneously. Our experiments support the view that using the suggested parabolic symmetry features, the extraction of which does not require explicit thinning or other morphological operations, constitutes a robust alternative to conventional minutiae extraction. All necessary image processing is done in the spatial domain using 1-D filters only, avoiding block artifacts that reduce the biometric information. We present comparisons to other studies on enhancement in matching tasks employing the open source matcher from NIST, FIS2. Furthermore, we compare the proposed minutiae extraction method with the corresponding method from the NIST package, mindtct. A top-five commercial matcher from FVC2006 is used in the enhancement quantification as well. The matching error is lowered significantly when plugging in the suggested methods. The FVC2004 fingerprint database, notable for its exceptionally low-quality fingerprints, is used for all experiments.</p>  +
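<p>A minimal sketch of just the linear-symmetry (structure tensor) orientation and reliability estimate that the enhancement builds on, assuming Sobel derivatives and Gaussian averaging; the pyramid decomposition and contextual smoothing themselves are not reproduced here.</p>
<pre>
# Sketch of the linear-symmetry orientation/reliability estimate only,
# not the full pyramid-based enhancement described in the abstract.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def orientation_and_reliability(img, sigma=3.0):
    gx = sobel(img.astype(float), axis=1)
    gy = sobel(img.astype(float), axis=0)
    z = (gx + 1j * gy) ** 2                          # squared complex gradient
    z_avg = gaussian_filter(z.real, sigma) + 1j * gaussian_filter(z.imag, sigma)
    mag_avg = gaussian_filter(np.abs(z), sigma)      # upper bound on |z_avg|
    orientation = np.angle(z_avg) / 2.0              # dominant gradient direction; ridges run perpendicular
    reliability = np.abs(z_avg) / (mag_avg + 1e-9)   # 1 = perfect linear symmetry, 0 = isotropic
    return orientation, reliability
</pre>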
<p>Recently, functional gradient algorithms like CHOMP have been very successful in producing locally optimal motion plans for articulated robots. In this paper, we adapt CHOMP to work with non-holonomic vehicles: an autonomous truck with a single trailer and a differential-drive robot. An extended CHOMP with rolling constraints has been implemented on both of these setups, yielding feasible curvatures. This paper details the experimental integration of the extended CHOMP motion planner with the sensor fusion and control system of an autonomous Volvo FH-16 truck. It also explains the experiments conducted on the differential-drive robot. Initial experimental investigations and results from a real-world environment show that CHOMP can produce smooth and collision-free trajectories for mobile robots and vehicles as well. In conclusion, this paper discusses the feasibility of applying CHOMP to mobile robots.</p>  +
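<p>For illustration only, the sketch below shows a plain CHOMP-style covariant gradient step on a 2D waypoint trajectory, without the rolling constraints of the extended planner; the circular-obstacle cost gradient is an assumed toy example.</p>
<pre>
# Minimal 2D sketch of a CHOMP-style covariant gradient step (no rolling
# constraints, unlike the extended planner described above).
import numpy as np

def chomp_step(xi, obstacle_grad, step=0.05, obs_weight=1.0):
    """One covariant gradient step on an (n, 2) trajectory with fixed endpoints."""
    n = len(xi)
    # Smoothness gradient (discrete Laplacian) at the interior waypoints.
    g_smooth = 2 * xi[1:-1] - xi[:-2] - xi[2:]
    g = g_smooth + obs_weight * obstacle_grad(xi[1:-1])
    # Precondition with the smoothness metric A (tridiagonal 2, -1).
    A = 2 * np.eye(n - 2) - np.eye(n - 2, k=1) - np.eye(n - 2, k=-1)
    xi = xi.copy()
    xi[1:-1] -= step * np.linalg.solve(A, g)
    return xi

def circular_obstacle_grad(points, center=np.array([2.0, 1.0]), radius=1.0):
    """Gradient of a hinge cost for one circular obstacle (toy example).
    The gradient points toward the obstacle centre for interior points,
    so the descent step in chomp_step pushes waypoints outward."""
    diff = points - center
    dist = np.linalg.norm(diff, axis=1, keepdims=True)
    inside = (dist < radius).astype(float)
    return -inside * diff / np.maximum(dist, 1e-9)

# Usage: straight-line initialization, then repeated covariant steps.
xi = np.linspace([0.0, 0.0], [4.0, 2.0], 50)
for _ in range(200):
    xi = chomp_step(xi, circular_obstacle_grad)
</pre>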
<p>A set of local feature descriptors for fingerprints is proposed. Minutia points are detected in a novel way by complex filtering of the structure tensor, not only revealing their position but also their direction. Parabolic and linear symmetry descriptions are used to model and extract local features including ridge orientation and reliability, which can be reused in several stages of fingerprint processing. Experimental results on the proposed technique are presented.</p>  +
<p>For the alignment of two fingerprints, certain landmark points are needed. These should be automatically extracted with a low misidentification rate. As landmarks we suggest the prominent symmetry points (singular points, SPs) in the fingerprints. We identify an SP by its symmetry properties. SPs are extracted from the complex orientation field estimated from the global structure of the fingerprint, i.e. the overall pattern of the ridges and valleys. Complex filters, applied to the orientation field at multiple resolution scales, are used to detect the symmetry and the type of symmetry. Experimental results are reported.</p>  +
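<p>The following single-scale sketch illustrates the idea of complex symmetry filtering of the orientation field (the paper applies such filters at multiple resolution scales); the filter sizes and constants are assumptions.</p>
<pre>
# Single-scale sketch of symmetry filtering for singular points (cores/deltas).
import numpy as np
from scipy.ndimage import gaussian_filter, sobel
from scipy.signal import fftconvolve

def symmetry_filter(order, size=25, sigma=4.0):
    """Complex filter of a given symmetry order, windowed by a Gaussian."""
    r = (size - 1) / 2.0
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    g = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    base = (x + 1j * y) ** abs(order)
    return (base if order >= 0 else np.conj(base)) * g

def singular_point_response(img, sigma=2.0):
    gx = sobel(img.astype(float), axis=1)
    gy = sobel(img.astype(float), axis=0)
    z = (gx + 1j * gy) ** 2                           # complex orientation field
    z = gaussian_filter(z.real, sigma) + 1j * gaussian_filter(z.imag, sigma)
    core = fftconvolve(z, symmetry_filter(+1), mode="same")    # core-type symmetry
    delta = fftconvolve(z, symmetry_filter(-1), mode="same")   # delta-type symmetry
    return np.abs(core), np.abs(delta)                # local maxima indicate candidate SPs
</pre>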
<p>The proliferation of cameras and personal devices results in a wide variability of imaging conditions, producing large intra-class variations and a significant performance drop when images from heterogeneous environments are compared. However, many applications regularly need to deal with data from different sources and thus must overcome these interoperability problems. Here, we employ fusion of several comparators to improve periocular performance when images from different smartphones are compared. We use a probabilistic fusion framework based on linear logistic regression, in which fused scores tend to be log-likelihood ratios, obtaining a reduction in cross-sensor EER of up to 40% due to the fusion. Our framework also provides an elegant and simple solution to handle signals from different devices, since same-sensor and cross-sensor score distributions are aligned and mapped to a common probabilistic domain. This allows the use of Bayes thresholds for optimal decision making, eliminating the need for sensor-specific thresholds, which is essential in operational conditions because the threshold setting critically determines the accuracy of the authentication process in many applications. © EURASIP 2017</p>  +
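<p>A minimal sketch of score-level fusion with linear logistic regression, using sklearn's LogisticRegression as a stand-in for the paper's calibration framework; the mapping to log-likelihood ratios and the Bayes threshold follow the standard formulas, with illustrative prior and cost parameters.</p>
<pre>
# Sketch of linear-logistic-regression score fusion and Bayes-threshold decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_fusion(scores, labels):
    """scores: (N, K) matrix of K comparator scores; labels: 1 genuine, 0 impostor."""
    model = LogisticRegression()
    model.fit(scores, labels)
    return model

def fused_llr(model, scores, train_prior):
    """Approximate log-likelihood ratios; train_prior = fraction of genuine trials in training."""
    log_posterior_odds = model.decision_function(scores)   # log P(gen|s) / P(imp|s)
    return log_posterior_odds - np.log(train_prior / (1 - train_prior))

def bayes_decision(llr, prior, c_miss=1.0, c_fa=1.0):
    """Accept when the LLR exceeds the Bayes threshold for the operating prior and costs."""
    threshold = np.log((c_fa * (1 - prior)) / (c_miss * prior))
    return llr >= threshold
</pre>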
M
<p>This paper presents an overview of an autonomous robotic material handling system. The goal of the system is to extend the functionalities of traditional AGVs to operate in highly dynamic environments. Traditionally, the reliable functioning of AGVs relies on the availability of adequate infrastructure to support navigation. In the target environments of our system, such infrastructure is difficult to set up in an efficient way. Additionally, the locations of objects to handle are unknown, which requires the system to detect and track object positions at runtime. Another requirement of the system is to be able to generate trajectories dynamically, which is uncommon in industrial AGV systems.</p>  +
<p>A consequence of the fragmented and siloed healthcare landscape is that patient care (and data) is split across a multitude of different facilities and computer systems, and enabling interoperability between these systems is hard. The lack of interoperability not only hinders continuity of care and burdens providers, but also hinders effective application of Machine Learning (ML) algorithms. Thus, most current ML algorithms, designed to understand patient care and facilitate clinical decision support, are trained on limited datasets. This approach is analogous to the Newtonian paradigm of reductionism, in which a system is broken down into elementary components and a description of the whole is formed by understanding those components individually. A key limitation of the reductionist approach is that it ignores the component-component interactions and dynamics within the system, which are often of prime significance in understanding the overall behaviour of complex adaptive systems (CAS). Healthcare is a CAS.</p><p>Though the application of ML on health data has shown incremental improvements for clinical decision support, ML has a much broader potential to restructure care delivery as a whole and maximize care value. However, this ML potential remains largely untapped, primarily due to functional limitations of Electronic Health Records (EHR) and the inability to see the healthcare system as a whole. This viewpoint (i) articulates healthcare as a complex system with a biological and an organizational perspective, (ii) motivates with examples the need for a systems approach when addressing healthcare challenges via ML and, (iii) emphasizes the need to unleash EHR functionality - while duly respecting all ethical and legal concerns - to reap the full benefits of ML.</p>  +
<p>A method and apparatus for data encoding and optical recognition of encoded data includes generating symbols that represent data using angles (rather than linear dimensions as used in conventional bar codes). One embodiment uses spiral isocurves; another uses parabola isocurves. Methods for visually depicting such symbols, including symbols having gray values, are disclosed. Methods for machine-based detection and decoding of such symbols, regardless of their orientation relative to the machine, are also disclosed.</p>  +
<p>Fuel consumption is a major economic component of vehicle operation, particularly for heavy-duty vehicles. It depends on many factors, such as the driver and the environment; over some factors, e.g. the route, there is control, and others, e.g. the driver, we can try to optimize. The driver accounts for around 30% of the operational cost for the fleet operator, so it is important to have efficient drivers, as they also influence fuel consumption, which is another major cost, amounting to around 40% of the cost of vehicle operation. The difference between good and bad drivers can be substantial, depending on the environment, experience and other factors.</p><p>In this thesis, two methods are proposed that aim at quantifying and qualifying the driver performance of heavy-duty vehicles with respect to fuel consumption. The first method, Fuel under Predefined Conditions (FPC), makes use of domain knowledge in order to incorporate the effect of factors which are not measured. Due to the complexity of the vehicles, many factors cannot be quantified precisely or even measured, e.g. wind speed and direction, or tire pressure. For FPC to be feasible, several assumptions need to be made regarding the unmeasured variables. The effect of said unmeasured variables has to be quantified, which is done by defining specific conditions that enable their estimation. Having calculated the effect of the unmeasured variables, the contribution of the measured variables can be estimated. All these steps are required to be able to calculate the influence of the driver. The second method, Accelerator Pedal Position - Engine Speed (APPES), seeks to qualify driver performance irrespective of the external factors by analyzing driver intention. APPES is a 2D histogram built from the two mentioned signals. Driver performance is expressed, in this case, using features calculated from APPES.</p><p>The focus of the first method is to quantify fuel consumption, giving us the possibility to estimate driver performance. The second method is more skewed towards qualitative analysis, allowing a better understanding of driver decisions and how they affect fuel consumption. Both methods have the ability to provide transferable knowledge that can be used to improve drivers' performance or automatic driving systems.</p><p>Throughout the thesis and the attached articles we show that both methods are able to operate within the specified conditions and achieve the set goals.</p>  
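<p>A minimal sketch of an APPES-style representation: a normalized 2D histogram over accelerator pedal position and engine speed, plus two illustrative features; the bin edges and feature definitions are assumptions, not the values used in the thesis.</p>
<pre>
# Sketch of an APPES-style 2D histogram and two illustrative features.
import numpy as np

def appes_histogram(pedal_pct, engine_rpm,
                    pedal_bins=np.linspace(0, 100, 21),
                    rpm_bins=np.linspace(0, 2500, 26)):
    hist, _, _ = np.histogram2d(pedal_pct, engine_rpm,
                                bins=[pedal_bins, rpm_bins])
    return hist / max(hist.sum(), 1)              # time share per operating region

def appes_features(hist, rpm_bins=np.linspace(0, 2500, 26)):
    high_rpm = rpm_bins[:-1] >= 1500              # columns above an assumed 1500 rpm cut-off
    return {
        "share_high_rpm": hist[:, high_rpm].sum(),
        "share_full_pedal": hist[-1, :].sum(),    # top pedal-position row
    }
</pre>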
<p>A minimax approach for multi-objective controller design is proposed, in which structured uncertainty is characterized by multiple discrete-time SISO models. Typical engineering objectives, such as bounds on different sensitivity functions and time-domain responses, are optimized for all models. The approach is illustrated by improving the best-performing controller of a flexible arm benchmark example.</p>  +
<p>We present a method for calculating the minimum EDF-feasible deadline. The algorithm targets periodic tasks with hard real-time guarantees that are to be feasibly scheduled with EDF (Earliest Deadline First). The output is the smallest possible deadline, required for feasibility, of the most recently requested task. An advantage of our algorithm is that it has the same time complexity as the regular EDF feasibility test when deadlines are not assumed to be equal to the periods of the periodic tasks.</p>  +
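<p>The sketch below is not the paper's algorithm (which matches the complexity of the regular EDF feasibility test); it is a brute-force illustration that searches for the smallest constrained deadline using the standard processor-demand criterion, assuming integer task parameters and deadlines no larger than periods.</p>
<pre>
# Brute-force sketch: smallest deadline for a new periodic task that keeps the
# task set EDF-feasible under the standard processor-demand criterion.
from math import gcd
from functools import reduce

def dbf(tasks, t):
    """Processor demand of synchronous periodic tasks (C, D, T) in [0, t]."""
    return sum(((t - d) // p + 1) * c for c, d, p in tasks if t >= d)

def edf_feasible(tasks):
    if sum(c / p for c, _, p in tasks) > 1:
        return False
    hyper = reduce(lambda a, b: a * b // gcd(a, b), (p for _, _, p in tasks), 1)
    checkpoints = sorted({d + k * p for c, d, p in tasks
                          for k in range((hyper - d) // p + 1)})
    return all(dbf(tasks, t) <= t for t in checkpoints)

def minimum_feasible_deadline(tasks, c_new, t_new):
    """Linear search over integer deadlines C..T for the newly requested task."""
    for d in range(c_new, t_new + 1):
        if edf_feasible(tasks + [(c_new, d, t_new)]):
            return d
    return None
</pre>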
<p>Random forests (RF) have become a popular technique for classification, prediction, studying variable importance, variable selection, and outlier detection. There are numerous application examples of RF in a variety of fields, and several large-scale comparisons including RF have been performed. There are numerous articles where variable importance evaluations based on the variable importance measures available from RF are used for data exploration and understanding. Apart from a literature survey of the RF area, this paper also presents results of new tests regarding variable rankings based on RF variable importance measures. We studied experimentally the consistency and generality of such rankings. The results of the studies indicate that there is no evidence supporting the belief in the generality of such rankings. A high variance of variable importance evaluations was observed in the case of a small number of trees and small data sets.</p>  +
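<p>A small sketch of the kind of consistency check discussed above: refit random forests with different random seeds and compare the resulting variable-importance rankings with Spearman correlation (sklearn and scipy are assumed; this is not the paper's exact protocol).</p>
<pre>
# Sketch: stability of RF variable-importance rankings across repeated fits.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier

def importance_rank_consistency(X, y, n_trees=100, repeats=10):
    """Mean/std of pairwise Spearman correlations between importance rankings."""
    importances = []
    for seed in range(repeats):
        rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
        rf.fit(X, y)
        importances.append(rf.feature_importances_)
    corr = [spearmanr(importances[i], importances[j])[0]
            for i in range(repeats) for j in range(i + 1, repeats)]
    return np.mean(corr), np.std(corr)
</pre>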
<p>We present two applications of path planning on costmaps: (i) the Probabilistic Navigation Function uses a smoothly varying co-occurrence estimation to trade off collision risk versus detour lengths, and (ii) a navigation system for exploration of unknown environments using growable costmaps, interweaving mapping, replanning, and control. By relying on costmaps as a general basis for planning and on path tracking as a generic motion control interface, our approach and implementation covers a wide range of planners and controllers. We achieve a relatively general-purpose system and introduce a limited amount of well-defined, user-definable heuristics that allow users to adapt the system to a given application. System integration and genericity is demonstrated by providing three specific implementations of the planner and controller components, all working within the same framework.</p>  +
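<p>As a simplified stand-in for the risk-versus-detour trade-off (not the Probabilistic Navigation Function itself), the sketch below runs Dijkstra on a grid costmap where each step costs its length plus a weighted cell risk.</p>
<pre>
# Simplified stand-in for costmap planning: Dijkstra with cost = step length + lambda * risk.
import heapq

def plan_on_costmap(risk, start, goal, risk_weight=10.0):
    """risk: 2D array of values in [0, 1] (1 = blocked); start/goal: (row, col)."""
    h, w = risk.shape
    dist, prev, heap = {start: 0.0}, {}, [(0.0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue
        r, c = cell
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and risk[nr, nc] < 1.0:
                nd = d + 1.0 + risk_weight * risk[nr, nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, cell = [], goal
    while cell in prev or cell == start:              # rebuild the path, if one was found
        path.append(cell)
        if cell == start:
            break
        cell = prev[cell]
    return path[::-1]
</pre>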
<p>Most existing work in information fusion focuses on combining information with well-defined meaning towards a concrete, pre-specified goal. In contradistinction, we instead aim for autonomous discovery of high-level knowledge from ubiquitous data streams. This paper introduces a method for recognition and tracking of hidden conceptual modes, which are essential to fully understand the operation of complex environments. We consider a scenario of analyzing usage of a fleet of city buses, where the objective is to automatically discover and track modes such as highway route, heavy traffic, or aggressive driver, based on available on-board signals. The method we propose is based on aggregating the data over time, since the high-level modes are only apparent in the longer perspective. We search through different features and subsets of the data, and identify those that lead to good clusterings, interpreting those clusters as initial, rough models of the prospective modes. We utilize Bayesian tracking in order to continuously improve the parameters of those models based on the new data, while at the same time following how the modes evolve over time. Experiments with artificial data of varying degrees of complexity, as well as with real-world datasets, prove the effectiveness of the proposed method in accurately discovering the modes and in identifying which one best explains the current observations from multiple data streams. © 2017 Elsevier B.V. All rights reserved.</p>  +
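<p>A sketch of the overall pipeline shape only (aggregation, clustering, Bayesian mode tracking); the window length, Gaussian likelihood, and sticky transition prior are illustrative assumptions rather than the models used in the paper.</p>
<pre>
# Sketch of the pipeline shape: aggregate streams, cluster window statistics
# into rough mode models, then track the active mode with a Bayesian update.
import numpy as np
from sklearn.cluster import KMeans

def window_statistics(signals, window=600):
    """Aggregate a multivariate stream of shape (T, d) into per-window means."""
    n = len(signals) // window
    return np.array([signals[i * window:(i + 1) * window].mean(axis=0)
                     for i in range(n)])

def discover_modes(features, n_modes=3):
    km = KMeans(n_clusters=n_modes, n_init=10).fit(features)
    return km.cluster_centers_

def track_modes(features, centers, stay_prob=0.9, sigma=1.0):
    """Recursive Bayesian update of the mode posterior with a sticky transition prior."""
    k = len(centers)
    trans = np.full((k, k), (1 - stay_prob) / (k - 1))
    np.fill_diagonal(trans, stay_prob)
    belief = np.full(k, 1.0 / k)
    history = []
    for x in features:
        belief = trans.T @ belief                                       # predict
        lik = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * sigma ** 2))
        belief = belief * lik
        belief = belief / max(belief.sum(), 1e-300)                     # update
        history.append(belief.copy())
    return np.array(history)
</pre>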
<p>Designing novel cyber-physical systems entails significant, costly physical experimentation. Simulation tools can enable the virtualization of experiments. Unfortunately, current tools have shortcomings that limit their utility for virtual experimentation. Language research can be especially helpful in addressing many of these problems. As a first step in this direction, we consider the question of determining what language features are needed to model cyber-physical systems. Using a series of elementary examples of cyber-physical systems, we reflect on the extent to which a small, experimental domain-specific formalism called Acumen suffices for this purpose.</p>  +
<p>We consider the question of what language features are needed to effectively model cyber-physical systems (CPS). In previous work, we proposed a core language called Acumen as a way to study this question, and showed how several basic aspects of CPS can be modeled clearly in a language with a small set of constructs. This paper reports on the result of our analysis of two more complex case studies from the domain of rigid body dynamics. The first one, a quadcopter, illustrates that Acumen can support larger, more interesting systems than previously shown. The second one, a serial robot, provides a concrete example of why explicit support for static partial derivatives can significantly improve the expressivity of a CPS modeling language.</p>  +
<p>We continue to consider the question of what language features are needed to effectively model cyber-physical systems (CPS). In previous work, we proposed using a core language as a way to study this question, and showed how several basic aspects of CPS can be modeled clearly in a language with a small set of constructs. This paper reports on the results of our analysis of two more complex case studies from the domain of rigid body dynamics. The first one, a quadcopter, illustrates that the previously proposed core language can support larger, more interesting systems than previously shown. The second one, a serial robot, provides a concrete example of why we should add language support for static partial derivatives, namely that it would significantly improve the way models of rigid body dynamics can be expressed. © 2014 IEEE.</p>  +
<p>Model-based tools have the potential to significantly improve the process of developing novel cyber-physical systems (CPS). In this paper, we consider the question of what language features are needed to model such systems. We use a small, experimental hybrid systems modeling language to show how a number of basic and pervasive aspects of cyber-physical systems can be modeled concisely using the small set of language constructs. We then consider four, more complex, case studies from the domain of robotics. The first, a quadcopter, illustrates that these constructs can support the modeling of interesting systems. The second, a serial robot, provides a concrete example of why it is important to support static partial derivatives, namely, that it significantly improves the way models of rigid body dynamics can be expressed. The third, a linear solenoid actuator, illustrates the language’s ability to integrate multiphysics subsystems. The fourth and final, a compass gait biped, shows how a hybrid system with non-trivial dynamics is modeled. Through this analysis, the work establishes a strong connection between the engineering needs of the CPS domain and the language features that can address these needs. The study builds the case for why modeling languages can be improved by integrating several features, most notably, partial derivatives, differentiation without duplication, and support for equations. These features do not appear to be addressed in a satisfactory manner in mainstream modeling and simulation tools.</p>  +