Property:Abstract
From ISLAB/CAISR
This is a property of type Text.
<p>In many applications of autonomous mobile robots the following problem is encountered: two maps of the same environment are available, one a prior map and the other a sensor map built by the robot. To benefit from all available information in both maps, the robot must find the correct alignment between the two maps. Many approaches exist to address this challenge; however, most previous methods rely on assumptions such as similar modalities of the maps, the same scale, or the existence of an initial guess for the alignment. In this work we propose a decomposition-based method for 2D spatial map alignment which does not rely on those assumptions. Our proposed method is validated and compared with other approaches, including generic data association approaches and map alignment algorithms. Real-world examples of four different environments with thirty-six sensor maps and four layout maps are used for this analysis. The maps, along with an implementation of the method, are made publicly available online.</p> +
<p>Human-operated and driverless trucks often collaborate in a mixed work space in industries and warehouses. This is more efficient and flexible than using only one kind of truck. However, since driverless trucks need to give way to human-operated trucks, a reliable detection system is required. Several challenges exist in the development of an obstacle detection system in an industrial setting. The first is to select interesting situations and objects. Overhanging objects are often found in industrial environments, e.g. tines on a forklift. The second is choosing a detection system that has the ability to detect those situations. The traditional laser scanner, situated two decimetres above the floor, does not detect overhanging objects. The third is to ensure that the perception system is reliable. A solution used on trucks today is to mount a 2D laser scanner on the top of the truck and tilt the scanner towards the floor. However, objects at the top of the truck will be detected too late and a collision cannot always be avoided. Our aim is to replace the upper 2D laser scanner with a 3D camera, either a structured-light or a time-of-flight (TOF) camera. It is important to maximize the field of view in the desired detection volume; hence, the placement of the sensor is important. We conducted laboratory experiments to check and compare the various sensors' capabilities for different colors, used tines, and a model of a tine in a controlled industrial environment. We also conducted field experiments in a warehouse. The conclusion is that both the tested structured-light and TOF sensors have problems detecting black items that are non-perpendicular to the sensor at the distance of interest. It is important to optimize the light economy, meaning the illumination power, field of view, and exposure time, in order to detect as many different objects as possible. Copyright © 2016 by ASTM International</p> +
<p>Home-based healthcare technologies aim to enable older people to age in place as well as to support those delivering care. Although a number of smart homes exist, there is no established method to architect these systems. This work proposes the development of a smart environment as an active database system. Active rules in the database, in conjunction with sensors and actuators, monitor and respond to events taking place in the home environment. Resource adapters integrate heterogeneous hardware and software technologies into the system. A 'Smart Bedroom' has been developed as a demonstrator. The proposed approach represents a flexible and robust architecture for smart homes and ambient assisted living systems. © 2013 IEEE.</p> +
<p>One of the open issues in fingerprint verification is the lack of robustness against image-quality degradation. Poor-quality images result in spurious and missing features, thus degrading the performance of the overall system. Therefore, it is important for a fingerprint recognition system to estimate the quality and validity of the captured fingerprint images. In this work, we review existing approaches for fingerprint image-quality estimation, including the rationale behind the published measures and visual examples showing their behavior under different quality conditions. We have also tested a selection of fingerprint image-quality estimation algorithms. For the experiments, we employ the BioSec multimodal baseline corpus, which includes 19 200 fingerprint images from 200 individuals acquired in two sessions with three different sensors. The behavior of the selected quality measures is compared, showing high correlation between them in most cases. The effect of low-quality samples in the verification performance is also studied for a widely available minutiae-based fingerprint matching system.</p> +
<p>Combustion timing control of SI engines can be improved by feedback of the peak pressure position (PPP). However, pressure sensors are costly, and therefore, nonintrusive and cheap ion-current ’soft sensors’ have been suggested. Three different algorithms have been proposed that extract information about PPP from the ion current signal. In this paper, these approaches are compared with respect to accuracy, operational range, implementation aspects, as well as sensitivity to engine load and inlet air humidity. Copyright © 2001 Society of Automotive Engineers, Inc.</p> +
<p>Traditionally, database management systems (DBMSs) have been employed exclusively for data management in infrastructures supporting Ambient Assisted Living (AAL) systems. However, DBMSs provide other mechanisms, such as those for security, dependability, and extensibility, that can facilitate the development, use, and maintenance of AAL applications. This work utilizes such mechanisms, particularly extensibility, and proposes a database-centric architecture to support home-based healthcare applications. An active database is used to monitor and respond to events taking place in the home, such as bed-exits. In-database data mining methods are applied to model early night behaviors of people living alone. Encapsulating the processing into the DBMS avoids transferring and processing sensitive data outside of the database, enables changes in the logic to be managed on-the-fly, and reduces code duplication. As a result, such an approach leads to better performance and increased security and privacy, and can facilitate the adaptability and scalability of AAL systems. An evaluation of the architecture with datasets collected in real homes demonstrated the feasibility and flexibility of the approach.</p> +
<p>Effective and creative CPS development requires expertise in disparate fields that have traditionally been taught in distinct disciplines. At the same time, students seeking a CPS education generally come from diverse educational backgrounds. In this paper we report on our recent experience developing and teaching a course on CPS. The course can be seen as a detailed proposal focused on three key questions: What are the core elements of CPS? How can these core concepts be integrated in the CPS design process? What types of modeling tools can assist in the design of cyber-physical systems? Experience from the first two offerings of the course is promising, and we discuss the lessons learned. All materials, including lecture notes and software used for the course, are openly available online.</p> +
<p>Classical iris biometric systems assume ideal environmental conditions and cooperative users for image acquisition. When conditions are less ideal, or users are uncooperative or unaware of their biometrics being taken, the image acquisition quality suffers. This makes it harder for iris localization and segmentation algorithms to properly segment the acquired image into iris and non-iris parts. Segmentation is a critical part of iris recognition systems, since errors in this initial stage are propagated to subsequent processing stages. Therefore, the performance of iris segmentation algorithms is paramount to the performance of the overall system. In order to properly evaluate and develop iris segmentation algorithms, especially under difficult conditions like off-angle acquisition, significant occlusions, or bad lighting, it is beneficial to assess the segmentation algorithm directly. Currently, the performance of iris segmentation algorithms is mostly evaluated via the recognition rate, and consequently via the overall performance of the biometric system. In order to streamline the development and assessment of iris segmentation algorithms without this dependence on the whole biometric system, we have generated an iris segmentation ground truth database. We show a method for evaluating iris segmentation performance based on this ground truth database and give examples of how to identify problematic cases in order to further analyse the segmentation algorithms. ©2014 IEEE.</p> +
<p>Applying machine learning methods in scenarios involving smart homes is a complex task. The many possible variations of sensors, feature representations, machine learning algorithms, middleware architectures, reasoning/decision schemes, and interactive strategies make research and development tasks non-trivial to solve. In this paper, the use of a portable, flexible and holistic smart home demonstrator is proposed to facilitate iterative development and the acquisition of feedback when testing with regard to the above-mentioned issues. Specifically, the focus in this paper is on scenarios involving anomaly detection and response. First, a model for anomaly detection is trained with simulated data representing a priori knowledge pertaining to a person living in an apartment. Then a reasoning mechanism uses the trained model to infer and plan a reaction to deviating activities. Reactions are carried out by a mobile interactive robot to investigate whether a detected anomaly constitutes a true emergency. The implemented demonstrator was able to detect and respond properly in 18 of 20 trials featuring normal and deviating activity patterns, suggesting the feasibility of the proposed approach for such scenarios. © IEEE 2015</p> +
<p>In real life, documents contain several font types, styles, and sizes. However, many character recognition systems show good results for specific types of documents and fail to produce satisfactory results for others. Over the past decades, various pattern recognition techniques have been applied with the aim of developing recognition systems insensitive to variations in the characteristics of documents. In this paper, we present a robust recognition system for Ethiopic script using a hybrid of classifiers. The complex structures of Ethiopic characters are structurally and syntactically analyzed, and represented as a pattern of simpler graphical units called primitives. The pattern is used for classification of characters using similarity-based matching and a neural network classifier. The classification result is further refined by using template matching. A pair of directional filters is used for creating templates and extracting structural features. The recognition system is tested on real-life documents, and experimental results are reported.</p> +
A Kernel based multi-resolution time series analysis for screening deficiencies in paper production +
<p>This paper is concerned with a multi-resolution tool for analysis of a time series, aiming to detect abnormalities in various frequency regions. The task is treated as kernel-based novelty detection applied to a multi-level time series representation obtained from the discrete wavelet transform. Given a priori knowledge that the abnormalities manifest themselves in several frequency regions, a committee of detectors utilizing data-dependent aggregation weights is built by combining the outputs of detectors operating in those regions.</p> +
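<p>A minimal sketch of this idea, assuming PyWavelets for the discrete wavelet transform and scikit-learn one-class SVMs as the kernel-based detectors; the per-band summary statistics and the optional fixed weights are illustrative stand-ins for the paper's data-dependent aggregation scheme.</p>
<pre>
# Sketch: kernel novelty detection on a multi-level wavelet representation.
# Assumes pywt (PyWavelets) and scikit-learn; signals share a common length.
import numpy as np
import pywt
from sklearn.svm import OneClassSVM

def wavelet_features(signal, wavelet="db4", levels=4):
    """One feature vector per decomposition level (frequency region)."""
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    return [np.array([c.mean(), c.std(), np.abs(c).max()]) for c in coeffs]

def train_committee(normal_signals):
    """Train one one-class detector per frequency region."""
    per_level = zip(*[wavelet_features(s) for s in normal_signals])
    return [OneClassSVM(kernel="rbf", nu=0.05).fit(np.vstack(level))
            for level in per_level]

def committee_score(detectors, signal, weights=None):
    """Weighted aggregation of per-region scores; negative = abnormal."""
    feats = wavelet_features(signal)
    scores = np.array([d.decision_function(f.reshape(1, -1))[0]
                       for d, f in zip(detectors, feats)])
    w = np.ones_like(scores) if weights is None else np.asarray(weights)
    return float(np.dot(w, scores) / w.sum())
</pre>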
<p>The E* algorithm is a path planning method capable of dynamic replanning and user-configurable path cost interpolation, which results in more appropriate paths during gradient descent. The underlying formulation is based on interpreting navigation functions as a sampled continuous crossing-time map that takes into account a risk measure. Replanning means that changes in the environment model can be repaired to avoid the expense of complete planning. This helps compensate for the increased computational effort required for interpolation.</p> +
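<p>To illustrate only the gradient-descent step mentioned above, here is a minimal sketch that follows a sampled navigation function (crossing-time map) downhill from a start cell; the continuous interpolation and dynamic-replanning machinery that characterize E* are omitted.</p>
<pre>
# Sketch: path extraction by gradient descent over a sampled value map.
# Illustration only - not the E* interpolation or repair machinery.
import numpy as np

def descend(value_map, start, step=0.5, tol=1e-3, max_iters=10000):
    """Follow the negative gradient of value_map from start to a minimum."""
    gy, gx = np.gradient(value_map)          # gradients along rows/columns
    p = np.array(start, dtype=float)
    path = [p.copy()]
    for _ in range(max_iters):
        i = int(round(np.clip(p[0], 0, value_map.shape[0] - 1)))
        j = int(round(np.clip(p[1], 0, value_map.shape[1] - 1)))
        g = np.array([gy[i, j], gx[i, j]])
        norm = np.linalg.norm(g)
        if norm < tol:                       # flat: (near) the goal region
            break
        p -= step * g / norm                 # take a unit step downhill
        path.append(p.copy())
    return np.array(path)
</pre>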
A Mobile Application for Easy Design and Testing of Algorithms to Monitor Physical Activity in the Workplace +
<p>This paper addresses approaches to Human Activity Recognition (HAR) with the aim of monitoring the physical activity of people in the workplace, by means of a smartphone application exploiting the available on-board accelerometer sensor. In fact, HAR via a smartphone or wearable sensor can provide important information regarding the level of daily physical activity, especially in situations where a sedentary behavior usually occurs, as in modern workplace environments. Increased sitting time is significantly associated with severe health diseases, and the workplace is an appropriate intervention setting, due to the sedentary behavior typical of modern jobs. Within this paper, the state-of-the-art components of HAR are analyzed, in order to identify and select the most effective signal filtering and windowing solutions for physical activity monitoring. The classifier development process is based upon three phases: a feature extraction phase, a feature selection phase, and a training phase. In the training phase, a publicly available dataset is used to test among different classifier types and learning methods. A user-friendly Android-based smartphone application with low computational requirements has been developed to run field tests, which makes it easy to change the classifier under test and to collect new datasets ready for use with machine learning APIs. The newly created datasets may include additional information, like the smartphone position, its orientation, and the user's physical characteristics. Using the mobile tool, a classifier based on a decision tree is finally set up and enriched with the introduction of some robustness improvements. The developed approach is capable of classifying six activities, and of distinguishing between not active (sitting) and active states, with an accuracy near 99%. The mobile tool, which is going to be further extended and enriched, will allow for rapid and easy benchmarking of new algorithms based on previously generated data and on future collected datasets. © 2016 Susanna Spinsante et al.</p> +
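<p>As a concrete illustration of the windowing, feature extraction, and training phases described above, here is a minimal sketch assuming scikit-learn; the window length, overlap, feature set, and tree depth are placeholder choices rather than the ones used in the paper.</p>
<pre>
# Sketch: accelerometer windowing, feature extraction, decision-tree training.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

WINDOW = 128    # samples per window (e.g. about 2.5 s at 50 Hz)
OVERLAP = 64    # 50% overlap between consecutive windows

def windows(samples):
    """Slide a fixed-length window over an (n, 3) accelerometer stream."""
    for start in range(0, len(samples) - WINDOW + 1, WINDOW - OVERLAP):
        yield samples[start:start + WINDOW]

def features(window):
    """Per-axis mean/std plus mean magnitude - a common minimal HAR set."""
    magnitude = np.linalg.norm(window, axis=1)
    return np.hstack([window.mean(axis=0), window.std(axis=0),
                      magnitude.mean()])

def train(streams, labels):
    """streams: list of (n_i, 3) arrays; one activity label per stream."""
    X, y = [], []
    for stream, label in zip(streams, labels):
        for w in windows(stream):
            X.append(features(w))
            y.append(label)
    return DecisionTreeClassifier(max_depth=10).fit(np.array(X), np.array(y))
</pre>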
<p>In today’s fast-changing electric utilities sector, demand response (DR) programs are a relatively inexpensive means of reducing peak demand and providing ancillary services. Advancements in embedded systems and communication technologies are paving the way for more complex DR programs based on transactive control. Such complex systems highlight the importance of modeling and simulation tools for studying and evaluating the effects of different control strategies for DR. Considerable efforts have been directed at modeling thermostatically controlled appliances. These models, however, operate with only one degree of freedom, typically the thermal mass temperature. This paper proposes a two-degree-of-freedom residential space heating system composed of a thermal storage unit and a forced convection system. Simulation results demonstrate that such a system is better suited for maintaining thermal comfort and allows greater flexibility for DR programs. The performance of several control strategies is evaluated, as well as the effects of model and weather parameters on thermal comfort and power consumption.</p> +
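<p>A minimal simulation sketch of such a two-degree-of-freedom system, with an electric heater charging a thermal storage unit and a fan-driven convection coupling to the room; all parameter values and control inputs below are illustrative, not taken from the paper.</p>
<pre>
# Sketch: two-state space-heating model - thermal storage plus forced
# convection. The heater and the fan are the two independent inputs.
import numpy as np

def simulate(hours=24.0, dt=60.0, T_out=-5.0,
             heater_on=lambda t: t % 7200 < 3600,   # toy duty cycle
             fan=lambda t: 0.5):                    # fan speed in [0, 1]
    C_s, C_r = 5e6, 1e6          # storage / room heat capacities [J/K]
    U_sr_max, U_ro = 200.0, 80.0 # storage-room / room-outdoor [W/K]
    P_heat = 3000.0              # electric heater power [W]
    T_s, T_r = 40.0, 20.0        # initial temperatures [degC]
    log = []
    for k in range(int(hours * 3600 / dt)):
        t = k * dt
        q_in = P_heat if heater_on(t) else 0.0
        q_sr = U_sr_max * fan(t) * (T_s - T_r)  # storage -> room
        q_ro = U_ro * (T_r - T_out)             # room -> outdoors
        T_s += dt * (q_in - q_sr) / C_s         # forward-Euler update
        T_r += dt * (q_sr - q_ro) / C_r
        log.append((t, T_s, T_r))
    return np.array(log)
</pre>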
<p>This article presents a method for colour measurements directly on printed half-tone multicoloured pictures. The article introduces the concept of colour impression. By this concept we mean the CMY or CMYK vector (colour vector), which lives in the three- or four-dimensional space of printing inks. Two factors contribute to values of the vector components, namely, the percentage of the area covered by cyan, magenta, yellow, and black inks (tonal values) and ink densities. The colour vector expresses integrated information about the tonal values and ink densities. Values of the colour vector components increase if tonal values or ink densities rise and vice versa. If, for some primary colour, the ink density and tonal value do not change, the corresponding component of the colour vector remains constant. If some reference values of the colour vector components are set from a preprint, then, after an appropriate calibration, the colour vector directly shows how much the operator needs to raise or lower the cyan, magenta, yellow, and black ink densities in order to correct colours of the picture being measured. The values of the components are obtained by registering the RGB image from the measuring area and then transforming the set of registered RGB values to the triplet or quadruple of CMY or CMYK values, respectively. Algorithms based on artificial neural networks are used for performing the transformation. During the experimental investigations, we have found a good correlation between components of the colour vector and ink densities.</p> +
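<p>A minimal sketch of the transformation step, using a scikit-learn multilayer perceptron in place of the authors' neural networks; the network size is a placeholder, and the calibration pairs must come from actual measurements.</p>
<pre>
# Sketch: learning an RGB -> CMYK mapping for the colour vector.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_rgb_to_cmyk(rgb, cmyk):
    """rgb: (n, 3) in [0, 1]; cmyk: (n, 4) in [0, 1], from calibration."""
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                       random_state=0)
    return net.fit(rgb, cmyk)

def colour_vector(net, rgb_patch):
    """Average the measuring area, then map to the CMYK colour vector."""
    mean_rgb = np.asarray(rgb_patch, dtype=float).reshape(-1, 3).mean(axis=0)
    return np.clip(net.predict(mean_rgb[None, :])[0], 0.0, 1.0)
</pre>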
A Novel Technique to Design an Adaptive Committee of Models Applied to Predicting Company's Future Performance +
<p>This article presents an approach to designing an adaptive, data-dependent committee of models applied to the prediction of several financial attributes for assessing a company's future performance. A self-organizing map (SOM) used for data mapping and analysis enables building committees whose size and aggregation weights are specific to each SOM node. The number of basic models aggregated into a committee and the aggregation weights depend on the accuracy of the basic models and their ability to generalize in the vicinity of the SOM node. The proposed technique led to a statistically significant increase in prediction accuracy compared to other types of committees.</p> +
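<p>A minimal sketch of a SOM-node-specific committee, assuming the MiniSom package and a simple inverse-error weighting rule as a stand-in for the paper's data-dependent scheme (node-specific committee sizes are omitted for brevity).</p>
<pre>
# Sketch: per-SOM-node aggregation weights from local validation errors.
# Build the map first, e.g.:
#   som = MiniSom(8, 8, X_train.shape[1]); som.train_random(X_train, 5000)
import numpy as np
from minisom import MiniSom

def build_node_weights(som, X_val, y_val, models):
    """Map each SOM node to per-model weights based on local errors."""
    weights = {}
    nodes = [som.winner(x) for x in X_val]
    for node in set(nodes):
        idx = [i for i, n in enumerate(nodes) if n == node]
        errs = np.array([np.mean((m.predict(X_val[idx]) - y_val[idx]) ** 2)
                         for m in models])
        w = 1.0 / (errs + 1e-9)   # lower local error -> larger weight
        weights[node] = w / w.sum()
    return weights

def committee_predict(som, weights, models, x):
    """Aggregate base-model outputs with the winning node's weights."""
    node = som.winner(x)
    w = weights.get(node, np.full(len(models), 1.0 / len(models)))
    preds = np.array([m.predict(x[None, :])[0] for m in models])
    return float(np.dot(w, preds))
</pre>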
<p>This paper presents a software component, the plan manager, which provides the services needed to build and execute plans in a multirobot context. This plan manager handles fully dynamic plans (insertion and removal of tasks), provides tools for safe concurrent execution and modification of plans, and handles distributed plan supervision without permanent robot-to-robot communication. The proposed concept is illustrated by a scenario which involves the navigation of a rover and an unmanned aerial vehicle in an initially unmapped environment. © SAGE Publications 2009 Los Angeles, London.</p> +
<p>We present a novel face tracking approach where optical flow information is incorporated into a modified version of the Viola-Jones detection algorithm. In the original algorithm, detection is static, as information from previous frames is not considered; in addition, candidate windows have to pass all stages of the classification cascade, otherwise they are discarded as containing no face. In contrast, the proposed tracker preserves information about the number of classification stages passed by each window. Such information is used to build a likelihood map, which represents the probability of having a face located at that position. Tracking capabilities are provided by extrapolating the position of the likelihood map to the next frame by optical flow computation. The proposed algorithm works in real time on a standard laptop. The system is verified on the Boston Head Tracking Database, showing that the proposed algorithm outperforms the standard Viola-Jones detector in terms of detection rate and stability of the output bounding box, while adding the capability to deal with occlusions. We also evaluate two recently published face detectors based on Convolutional Networks and Deformable Part Models, with our algorithm showing a comparable accuracy at a fraction of the computation time.</p> +
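<p>A minimal sketch of the tracking idea, combining OpenCV's stock cascade (whose detectMultiScale3 exposes per-window confidence scores) with dense Farneback optical flow; this is a simplified stand-in for the paper's modified Viola-Jones cascade and likelihood-map construction.</p>
<pre>
# Sketch: build a detection likelihood map and warp it along optical flow.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def likelihood_map(gray):
    """Accumulate per-window cascade scores into a per-pixel map."""
    lmap = np.zeros(gray.shape, np.float32)
    rects, _, scores = cascade.detectMultiScale3(gray,
                                                 outputRejectLevels=True)
    for (x, y, w, h), s in zip(rects, np.ravel(scores)):
        lmap[y:y + h, x:x + w] += float(s)
    return lmap

def propagate(lmap, prev_gray, gray):
    """Warp the previous likelihood map to the current frame."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    return cv2.remap(lmap, xs - flow[..., 0], ys - flow[..., 1],
                     cv2.INTER_LINEAR)
</pre>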
<p>A SOM-based model combination strategy, allowing the creation of adaptive, data-dependent committees, is proposed. Both the models included in a committee and the aggregation weights are specific to each input data point analyzed. The ability to detect outliers is one more characteristic feature of the strategy.</p> +
A SOM-based data mining strategy for adaptive modelling of an offset lithographic printing process +
<p>This paper is concerned with a SOM-based data mining strategy for adaptive modelling of a slowly varying process. The aim is to follow the process in a way that makes a representative, up-to-date data set of a reasonable size available at any time. The technique developed allows analysis and filtering of redundant data, detection of the need to update the process models and the core module of the system itself, and creation of process models of adaptive, data-dependent complexity. Experimental investigations performed using data from a slowly varying offset lithographic printing process have shown that the tools developed can follow the process and make the necessary adaptations of the data set and the process models. A low process-modelling error has been obtained by employing data-dependent committees for modelling the process.</p> +