Property:Abstract
This is a property of type Text.
I
<p>We study agents situated in partially observable environments, who do not have the resources to create conformant plans. Instead, they create conditional plans which are partial, and learn from experience to choose the best of them for execution. Our agent employs an incomplete symbolic deduction system based on Active Logic and Situation Calculus for reasoning about actions and their consequences. An Inductive Logic Programming algorithm generalises observations and deduced knowledge in order to choose the best plan for execution. We show results of using the PROGOL learning algorithm to distinguish "bad" plans, and we present three modifications which make the algorithm fit this class of problems better. Specifically, we limit the search space by fixing the semantics of conditional branches within plans, we guide the search by specifying the relative relevance of portions of the knowledge base, and we integrate the learning algorithm into the agent architecture by allowing it to directly access the agent's knowledge encoded in Active Logic. We report on experiments which show that those extensions lead to significantly better learning results.</p> +
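<p>The paper's learning step is relational (PROGOL over Active Logic knowledge), which the abstract does not detail; the sketch below is only a loose propositional stand-in for the surrounding idea of filtering candidate partial plans with a classifier trained to recognise "bad" plans. The plan features and the scikit-learn classifier are illustrative assumptions, not the paper's method.</p>
<pre>
# Propositional stand-in: learn which partial conditional plans turned out
# "bad" in past executions, then filter new candidates before execution.
from sklearn.tree import DecisionTreeClassifier

# Assumed, illustrative plan features: [n_branches, n_sensing_actions, max_depth]
past_plans = [[1, 0, 3], [4, 2, 7], [2, 1, 4], [5, 3, 9]]
was_bad    = [0, 1, 0, 1]            # labels gathered from execution experience

clf = DecisionTreeClassifier(max_depth=3).fit(past_plans, was_bad)

candidates = {"planA": [2, 1, 5], "planB": [5, 2, 8]}
viable = {name: f for name, f in candidates.items() if clf.predict([f])[0] == 0}
print("plans kept for execution:", list(viable))
</pre>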
<p>In this paper, we present the design of a surveying system for a warehouse environment using a low-cost quadcopter. The system focuses on mapping the infrastructure of the surveyed environment. Being unique and essential parts of the warehouse, the pillars of the storage shelves are chosen as landmark objects for representing the environment. The map is generated by fusing the outputs of two different methods: the point cloud of corner features from the Parallel Tracking and Mapping (PTAM) algorithm and the pillar positions estimated by a multi-stage image analysis method. Localization of the drone relies on the PTAM algorithm. The system is implemented in Robot Operating System (ROS) and MATLAB, and has been successfully tested in real-world experiments. The resulting map, after scaling, has a metric error of less than 20 cm. © Springer International Publishing Switzerland 2016.</p> +
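<p>The abstract notes that the map is metrically correct only after scaling, without stating how the scale is obtained. The sketch below shows one plausible way, assuming the known spacing between neighbouring shelf pillars is used as the metric reference; this is an assumption for illustration, not the paper's documented procedure.</p>
<pre>
# Minimal sketch: recover the metric scale of an up-to-scale PTAM map from the
# (assumed known) spacing between neighbouring shelf pillars.
import numpy as np

pillars_ptam = np.array([[0.00, 0.0], [0.31, 0.0], [0.63, 0.0]])  # map units
known_spacing_m = 2.5                                             # metres (assumed)

d_map = np.linalg.norm(np.diff(pillars_ptam, axis=0), axis=1)     # spacings in map units
scale = known_spacing_m / d_map.mean()
pillars_metric = pillars_ptam * scale
print(f"scale factor: {scale:.2f} m per map unit")
</pre>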
<p>Automatic and robust ink feed control in a web-fed offset printing press is the objective of this work. To achieve this goal, an integrating controller and a multiple neural models-based controller are combined. The neural networks-based printing process models are built and updated automatically without any interaction from the user. The multiple models-based controller is superior to the integrating controller as long as the process is running in the training region of the models. However, the multiple models-based controller may run into generalisation problems if the process starts operating in a new part of the input space. Such situations are automatically detected and the integrating controller temporarily takes over the process control. The developed control configuration has successfully been used to automatically control the ink feed in the web-fed offset printing press according to the target amount of ink. Use of the developed tools led to higher print quality and lower ink and paper waste.</p> +
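<p>The switching logic described above can be illustrated in a few lines: the multiple-models controller is used while the operating point lies inside the training region, and the integrating controller takes over otherwise. The novelty test, threshold and stand-in "neural" model below are illustrative assumptions, not the system's actual implementation.</p>
<pre>
# Sketch of the switching idea: use the neural-model controller inside its
# training region, fall back to an integrating controller outside it.
import numpy as np

train_inputs = np.random.rand(200, 3)            # operating points seen in training
neural_control = lambda x: 0.5 * x.sum()         # stand-in for the model-based controller
NOVELTY_THRESHOLD = 0.25                         # assumed

def control_step(x, error, u_prev, ki=0.1):
    dist = np.linalg.norm(train_inputs - x, axis=1).min()
    if dist > NOVELTY_THRESHOLD:                 # outside the training region
        return u_prev + ki * error               # integrating controller takes over
    return neural_control(x)                     # multiple-models controller

print(control_step(np.array([0.2, 0.4, 0.1]), error=0.05, u_prev=0.3))
</pre>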
<p>A multiple model-based controller has been developed aiming at controlling the ink flow in the offset lithographic printing process. The control system consists of a model pool of four couples of inverse and direct models. Each couple evaluates a number of probable control signals, and the couple generating the most suitable control signal is used to control the printing press at that moment. The developed system has been tested at a newspaper printing shop during normal production. The results show that the developed modelling and control system is able to drive the output of the printing press to the desired target levels.</p> +
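<p>A rough sketch of the selection step described above: each couple's inverse model proposes a control signal, its direct model predicts the resulting output, and the proposal predicted to land closest to the target is applied. The toy models and single-target formulation are assumptions for illustration.</p>
<pre>
# Sketch of the model-pool selection step with trivial stand-in models.
import numpy as np

couples = [
    {"inverse": lambda y: 2.0 * y, "direct": lambda u: 0.5 * u},
    {"inverse": lambda y: 1.8 * y, "direct": lambda u: 0.6 * u},
]

def select_control(target_density):
    proposals = [(c["inverse"](target_density), c) for c in couples]
    # rank proposals by how close the direct model predicts each will land
    errors = [abs(c["direct"](u) - target_density) for u, c in proposals]
    best = int(np.argmin(errors))
    return proposals[best][0]

print(select_control(target_density=0.8))
</pre>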
Integrating global and local analysis of color, texture and geometrical information for categorizing laryngeal images +
<p>An approach to integrating the global and local kernel-based automated analysis of vocal fold images, aiming to categorize laryngeal diseases, is presented in this paper. The problem is treated as an image analysis and recognition task. A committee of support vector machines is employed for performing the categorization of vocal fold images into healthy, diffuse and nodular classes. Analysis of image color distribution, Gabor filtering, co-occurrence matrices, analysis of color edges, image segmentation into homogeneous regions from the image color, texture and geometry viewpoint, analysis of the soft membership of the regions in the decision classes, and kernel principal component-based feature extraction are the techniques employed for the global and local analysis of laryngeal images. Bearing in mind the high similarity of the decision classes, the correct classification rate of over 94% obtained when testing the system on 785 vocal fold images is rather encouraging.</p> +
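<p>The committee described above can be sketched as one SVM per feature group (e.g. colour, texture, geometry) combined by majority vote into the healthy/diffuse/nodular decision. The synthetic data, fixed feature-group split and plain majority vote are simplifying assumptions; the paper's committee and kernel feature extraction are richer.</p>
<pre>
# Minimal sketch of a committee of SVMs, one per feature group, majority vote.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))                 # 3 groups of 4 features each (synthetic)
y = rng.integers(0, 3, size=60)               # 0=healthy, 1=diffuse, 2=nodular
groups = [slice(0, 4), slice(4, 8), slice(8, 12)]

committee = [SVC(kernel="rbf").fit(X[:, g], y) for g in groups]

def classify(sample):
    votes = [m.predict(sample[g].reshape(1, -1))[0] for m, g in zip(committee, groups)]
    return np.bincount(votes, minlength=3).argmax()

print(classify(X[0]))
</pre>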
<p>In this paper, we present a neural networks and image analysis based approach to assessing colour deviations in an offset printing process from direct measurements on halftone multicoloured pictures; there are no measuring areas printed solely to assess the deviations. A committee of neural networks is trained to assess the ink proportions in a small image area. From only one measurement the trained committee is capable of estimating the actual amount of printing inks dispersed on paper in the measuring area. To match the measured image area of the printed picture with the corresponding area of the original image, when comparing the actual ink proportions with the targeted ones, properties of the 2-D Fourier transform are exploited.</p> +
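<p>The abstract states that properties of the 2-D Fourier transform are exploited to match the measured area with the corresponding area of the original image, without giving details. A common FFT-based matching technique is phase correlation, sketched below under the assumption that the matching amounts to estimating a translation; the ink-estimation committee itself is not reproduced.</p>
<pre>
# Phase correlation: estimate the shift that best aligns two image areas.
import numpy as np

def phase_correlation(a, b):
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy, dx                               # wrap-around shift in pixels

patch = np.random.rand(64, 64)
shifted = np.roll(patch, (5, 3), axis=(0, 1))
print(phase_correlation(shifted, patch))        # expected: (5, 3)
</pre>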
<p>This paper presents a method and a system to identify the number of magnetic correction shunts and their positions for deflection yoke tuning to correct the misconvergence of colors of a cathode ray tube. The method proposed consists of two phases, namely, learning and optimization. In the learning phase, the radial basis function neural network is trained to learn a mapping: correction shunt position --> changes in misconvergence. In the optimization phase, the trained neural network is used to predict changes in misconvergence depending on a correction shunt position. An optimization procedure based on the predictions returned by the neural net is then executed in order to find the minimal number of correction shunts needed and their positions. During the experimental investigations, 98% of the deflection yokes analyzed have been tuned successfully using the technique proposed.</p> +
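<p>As a sketch of the optimization phase described above, a trained regressor (here a small MLP standing in for the radial basis function network) predicts the misconvergence change for a candidate shunt position, and shunts are added greedily until the residual misconvergence falls below a tolerance. The toy data, the greedy strategy and the tolerance are illustrative assumptions.</p>
<pre>
# Greedy shunt placement guided by a learned position -> effect model.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
positions = rng.uniform(0, 1, size=(100, 1))                 # training shunt positions
effects = -0.8 * positions[:, 0] + rng.normal(0, 0.05, 100)  # observed change (toy data)
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000).fit(positions, effects)

def place_shunts(misconvergence, candidates, tol=0.1, max_shunts=5):
    chosen = []
    while abs(misconvergence) > tol and len(chosen) < max_shunts:
        preds = model.predict(candidates.reshape(-1, 1))
        best = int(np.argmin(np.abs(misconvergence + preds)))  # best residual after adding
        chosen.append(float(candidates[best]))
        misconvergence += preds[best]
    return chosen, misconvergence

print(place_shunts(0.9, np.linspace(0, 1, 21)))
</pre>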
<p>Colour, shape, geometry, contrast, irregularity and roughness of the visual appearance of vocal cords are the main visual features used by a physician to diagnose laryngeal diseases. This type of examination is rather subjective and to a great extent depends on the physician’s experience. A decision support system for automated analysis of vocal cord images, created by exploiting numerous vocal cord images, can be a valuable tool enabling increased reliability of the analysis and decreased intra- and inter-observer variability. This paper is concerned with such a system for analysis of vocal cord images. Colour, texture, and geometrical features are used to extract relevant information. A committee of artificial neural networks is then employed for performing the categorization of vocal cord images into healthy, diffuse, and nodular classes. A correct classification rate of over 93% was obtained when testing the system on 785 vocal cord images.</p> +
<p>Autonomous robotic systems operating in the vicinity of other agents, such as humans, manually driven vehicles and other robots, can model the behaviour and estimate intentions of the other agents to enhance efficiency of their operation, while preserving safety. We propose a data-driven approach to model the behaviour of other agents, which is based on a set of trajectories navigated by other agents. Then, to evaluate the proposed behaviour modelling approach, we propose and compare two methods for agent intention estimation based on: (i) particle filtering; and (ii) decision trees. The proposed methods were validated using three datasets that consist of real-world bicycle and car trajectories in two different scenarios, at a roundabout and at a t-junction with a pedestrian crossing. The results validate the utility of the data-driven behaviour model, and show that decision-tree based intention estimation works better on a binary-class problem, whereas the particle-filter based technique performs better on a multi-class problem, such as the roundabout, where the method yielded an average gain of 14.88 m for correct intention estimation locations compared to the decision-tree based method. © 2018 by the authors</p> +
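<p>The particle-filter variant described above can be illustrated as follows: each particle carries an intention hypothesis, weights are updated by how well the observed motion matches that intention's motion model, and the per-intention weight mass gives the intention probabilities. The two intentions, the heading-based likelihood and the omission of resampling are simplifications, not the paper's exact formulation.</p>
<pre>
# Particle-filter sketch for intention estimation with toy motion models.
import numpy as np

rng = np.random.default_rng(2)
INTENTIONS = {"exit_right": np.array([1.0, 0.0]),   # assumed mean headings
              "go_straight": np.array([0.0, 1.0])}

n = 500
particles = rng.choice(list(INTENTIONS), size=n)
weights = np.full(n, 1.0 / n)

def update(observed_heading, noise=0.3):
    global weights
    for i, p in enumerate(particles):
        err = np.linalg.norm(observed_heading - INTENTIONS[p])
        weights[i] *= np.exp(-0.5 * (err / noise) ** 2)      # Gaussian likelihood
    weights /= weights.sum()
    return {k: weights[particles == k].sum() for k in INTENTIONS}

print(update(np.array([0.9, 0.1])))    # observation pointing right
</pre>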
<p>In this survey, 105 papers related to interactive clustering were reviewed according to seven perspectives: (1) on what level is the interaction happening, (2) which interactive operations are involved, (3) how user feedback is incorporated, (4) how interactive clustering is evaluated, (5) which data and (6) which clustering methods have been used, and (7) what outlined challenges there are. This article serves as a comprehensive overview of the field and outlines the state of the art within the area as well as identifies challenges and future research needs. © 2020 Copyright held by the owner/author(s).</p> +
Interactive clustering for exploring multiple data streams at different time scales and granularity +
<p>We approach the problem of identifying and interpreting clusters over different time scales and granularity in multivariate time series data. We extract statistical features over a sliding window of each time series, and then use a Gaussian mixture model to identify clusters which are then projected back on the data streams. The human analyst can then further analyze this projection and adjust the size of the sliding window and the number of clusters in order to capture the different types of clusters over different time scales. We demonstrate the effectiveness of our approach in two different application scenarios: (1) fleet management and (2) district heating, where, in each scenario, several different types of meaningful clusters can be identified when varying over these dimensions. © 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM.</p> +
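<p>A minimal sketch of the pipeline described above: statistical features are computed over a sliding window, a Gaussian mixture model identifies clusters, and the labels are projected back onto the stream. The window size and the number of clusters are exactly the knobs the analyst adjusts interactively; the feature set here (mean and standard deviation) is a simplifying assumption.</p>
<pre>
# Sliding-window features -> Gaussian mixture clusters -> project labels back.
import numpy as np
from sklearn.mixture import GaussianMixture

signal = np.sin(np.linspace(0, 20, 1000)) + np.random.normal(0, 0.1, 1000)
WINDOW, K = 50, 3                                      # the two interactive knobs

feats = np.array([[signal[i:i + WINDOW].mean(), signal[i:i + WINDOW].std()]
                  for i in range(0, len(signal) - WINDOW, WINDOW)])
labels = GaussianMixture(n_components=K, random_state=0).fit_predict(feats)

# every sample in a window inherits its window's cluster label
projected = np.repeat(labels, WINDOW)[:len(signal)]
print(np.bincount(labels))
</pre>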
Interactive feature extraction for diagnostic trouble codes in predictive maintenance: A case study from automotive domain +
<p>Predicting future maintenance needs of equipment can be addressed in a variety of ways. Methods based on machine learning approaches provide an interesting platform for mining large data sets to find patterns that might correlate with a given fault. In this paper, we approach predictive maintenance as a classification problem and use Random Forest to separate data readouts within a particular time window into those corresponding to faulty and non-faulty component categories. We utilize diagnostic trouble codes (DTCs) as an example of event-based data, and propose four categories of features that can be derived from DTCs as a predictive maintenance framework. We test the approach using large-scale data from a fleet of heavy duty trucks, and show that DTCs can be used within our framework as indicators of imminent failures in different components.</p> +
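<p>A sketch of the classification setup described above, with synthetic stand-ins for the DTC-derived features (e.g. per-code counts within the time window) and the faulty/non-faulty labels; the paper's specific feature categories and fleet data are not reproduced.</p>
<pre>
# Random Forest separating "component fails soon" from "healthy" readouts.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.poisson(2, size=(400, 10)).astype(float)     # e.g. DTC counts per code (synthetic)
y = (X[:, 0] + X[:, 3] > 6).astype(int)              # toy "imminent failure" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
</pre>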
<p>Diagnosing deviations and predicting faults is an important task, especially given recent advances related to Internet of Things. However, the majority of the efforts for diagnostics are still carried out by human experts in a time-consuming and expensive manner. One promising approach towards self-monitoring systems is based on the "wisdom of the crowd" idea, where malfunctioning equipment is detected by understanding the similarities and differences in the operation of several alike systems.</p><p>A fully autonomous fault detection, however, is not possible, since not all deviations or anomalies correspond to faulty behaviors; many can be explained by atypical usage or varying external conditions. In this work, we propose a method which gradually incorporates expert-provided feedback for more accurate self-monitoring. Our idea is to support model adaptation while allowing human feedback to persist over changes in data distribution, such as concept drift. © 2019 Association for Computing Machinery.</p> +
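<p>A small sketch of the idea described above, under simplifying assumptions: a unit is flagged when it deviates from the fleet consensus, unless an expert has previously accepted a similar deviation, and that feedback is kept as the reference statistics are re-estimated. The thresholds and the similarity test are illustrative, not the paper's method.</p>
<pre>
# "Wisdom of the crowd" deviation check with persisting expert feedback.
import numpy as np

fleet = np.random.normal(1.0, 0.05, size=(30,))     # same signal across 30 similar systems
accepted_deviations = []                            # expert feedback, kept over time

def check(unit_value, threshold=3.0, sim=0.02):
    z = abs(unit_value - np.median(fleet)) / (fleet.std() + 1e-9)
    if z < threshold:
        return "normal"
    if any(abs(unit_value - a) < sim for a in accepted_deviations):
        return "deviating, but previously accepted by expert"
    return "flag for expert review"

print(check(1.4))                 # flagged
accepted_deviations.append(1.4)   # expert: atypical usage, not a fault
print(check(1.41))                # similar deviation no longer flagged
</pre>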
<p>Mobile robots are increasingly being used in automation solutions, with notable examples in service robots, such as home care, and warehouses. Autonomy of mobile robots is particularly challenging, since their work space is not deterministic, known a priori, or fully predictable. Accordingly, the ability to model the work space, that is, robotic mapping, is among the core technologies that form the backbone of autonomous mobile robots. However, for some applications the abilities of mapping and localization do not meet all the requirements, and robots with an enhanced awareness of their surroundings are desired. For instance, a map augmented with semantic labels is instrumental in supporting Human-Robot Interaction and high-level task planning and reasoning.</p><p>This thesis addresses this requirement through an interpretation and integration of multiple input maps into a semantically annotated heterogeneous representation. The heterogeneous representation should contain different interpretations of an input map, establish and maintain associations among different input sources, and construct a hierarchy of abstraction through model-based representation. The structuring and construction of this representation are at the core of this thesis, and the main objectives are: a) modeling, interpretation, semantic annotation, and association of the different data sources into a heterogeneous representation, and b) improving the autonomy of the aforementioned processes by curtailing the dependency of the methods on human input, such as domain knowledge.</p><p>This work proposes map interpretation techniques, such as abstract representation through modeling and semantic annotation, in an attempt to enrich the final representation. In order to associate multiple data sources, this work also proposes a map alignment method. The contributions and general observations that result from the studies included in this work can be summarized as: i) a manner of structuring the heterogeneous representation, ii) underlining the advantages of modeling and abstract representations, iii) several approaches to semantic annotation, and iv) improved extensibility of methods by lessening their dependency on human input.</p><p>The scope of the work is focused on 2D maps of well-structured indoor environments, such as warehouses, homes, and office buildings.</p>
Investigation into reducing anthropomorphic hand degrees of freedom while maintaining human hand grasping functions +
<p>Underactuation is widely used when designing anthropomorphic hands, which involves fewer degrees of actuation than degrees of freedom. However, the similarities between coordinated joint movements and movement variances across different grasp tasks have not been suitably examined. This work suggests a systematic approach to identify the actuation strategy with the minimum number of degrees of actuation for anthropomorphic hands. This work evaluates the correlations of coordinated movements in human hands during 23 grasp tasks to suggest actuation strategies for anthropomorphic hands. Our approach proceeds as follows: first, we find the best description for each coordinated joint movement in each grasp task by using multiple linear regression; then, based on the similarities between joint movements, we classify hand joints into groups by using hierarchical cluster analysis; finally, we reduce the dimensionality of each group of joints by employing principal components analysis. The metacarpophalangeal joints and proximal interphalangeal joints have the best and most consistent description of their coordinated movements across all grasp tasks. The thumb metacarpophalangeal and abduction/adduction between the ring and little fingers exhibit relatively high independence of movement. The distal interphalangeal joints show a high degree of independent movement but not for all grasp tasks. Analysis of the results indicates that for the distal interphalangeal joints, their coordinated movements are better explained when all fingers wrap around the object. Our approach fails to provide more information for the other joints. We conclude that 19 degrees of freedom for an anthropomorphic hand can be reduced to 13 degrees of actuation distributed between six groups of joints. The number of degrees of actuation can be further reduced to six by relaxing the dimensionality reduction criteria. Other resolutions are as follows: (a) the joint coupling scheme should be joint-based rather than finger-based and (b) hand designs may need to include finger abduction/adduction movements.</p>
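<p>The three analysis steps described above (describing coordinated movements, grouping similar joints with hierarchical clustering, and reducing each group with principal components analysis) can be sketched on a synthetic joint-angle matrix as below; the correlation-based similarity and the 90% variance criterion are simplifying assumptions, not the paper's exact criteria.</p>
<pre>
# Group joints by movement similarity, then count retained PCA components per group.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
angles = rng.normal(size=(500, 19))                 # 500 time samples x 19 joints (synthetic)

corr = np.corrcoef(angles.T)                        # joint-to-joint similarity
dist = 1.0 - np.abs(corr)                           # dissimilarity for clustering
Z = linkage(dist[np.triu_indices(19, k=1)], method="average")
groups = fcluster(Z, t=6, criterion="maxclust")     # e.g. six groups of joints

dof = 0
for g in np.unique(groups):
    block = angles[:, groups == g]
    pca = PCA(n_components=0.9).fit(block)          # keep 90% of the variance (assumed)
    dof += pca.n_components_
print("degrees of actuation suggested:", dof)
</pre>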
<p>A model based soft sensor that estimates the location of the in-cylinder pressure peak from the ion current is described. The soft sensor uses a neural network algorithm and has been implemented in a SAAB 9000 low-pressure turbo production car. It estimates the pressure peak location, in real time, during normal highway driving with an error of 2-3 crank angle degrees. The soft sensor has been tested during normal Scandinavian weather conditions, with a relative air humidity of about 50%, as well as when water is sprayed into the intake manifold, resulting in approximately 100% relative humidity. The neural network based soft sensor performs significantly better on this task than an alternative method based on nonlinear Gaussian curve fits.</p> +
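<p>A minimal sketch of the soft-sensor idea: a small neural network regressor maps features derived from the ion-current signal to the crank angle of the pressure peak. The synthetic features and the scikit-learn model are stand-ins; the production system was trained and evaluated on measured engine cycles.</p>
<pre>
# Neural-network soft sensor: ion-current features -> pressure peak location.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
ion_features = rng.normal(size=(1000, 8))            # e.g. parameterised ion-current signal
peak_angle = 10 + ion_features[:, 0] * 2 + rng.normal(0, 1, 1000)  # degrees ATDC (toy)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(ion_features[:800], peak_angle[:800])
err = np.abs(model.predict(ion_features[800:]) - peak_angle[800:])
print(f"mean absolute error: {err.mean():.2f} crank angle degrees")
</pre>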
Iris Boundaries Segmentation Using the Generalized Structure Tensor: A Study on the Effects of Image Degradation +
<p>We present a new iris segmentation algorithm based on the Generalized Structure Tensor (GST), which also includes an eyelid detection step. It is compared with traditional segmentation systems based on the Hough transform and integro-differential operators. Results are given using the CASIA-IrisV3-Interval database. Segmentation performance under different degrees of image defocus and motion blur is also evaluated. Reported results show the effectiveness of the proposed algorithm, with performance similar to the others in pupil detection, and clearly better performance for sclera detection at all levels of degradation. Verification results using 1D Log-Gabor wavelets are also given, showing the benefits of the eyelid removal step. These results point out the validity of the GST as an alternative to other iris segmentation systems. © 2012 IEEE.</p> +
<p>This paper presents a pupil detection/segmentation algorithm for iris images based on Structure Tensor analysis. Eigenvalues of the structure tensor matrix have been observed to be high at pupil boundaries and specular reflections in iris images. We exploit this fact to detect the specular reflection regions and the boundary of the pupil in a sequential manner. Experimental results are given using the CASIA-IrisV3-Interval database (249 contributors, 396 different eyes, 2,639 iris images). Results show that our algorithm works especially well in detecting the specular reflections (98.98% success rate), and pupil boundary detection is correctly done in 84.24% of the images.</p> +
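<p>The structure-tensor cue used in the two abstracts above can be sketched directly: per-pixel gradient products are smoothed into the 2x2 tensor and its eigenvalues are computed, giving high responses on strong edges such as the pupil boundary. The toy image and the simple check below are illustrative; the papers' reflection handling, eyelid detection and boundary fitting are not reproduced.</p>
<pre>
# Structure tensor eigenvalues: high response on the pupil boundary of a toy image.
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_eigvals(img, sigma=2.0):
    Iy, Ix = np.gradient(img.astype(float))
    Jxx = gaussian_filter(Ix * Ix, sigma)
    Jxy = gaussian_filter(Ix * Iy, sigma)
    Jyy = gaussian_filter(Iy * Iy, sigma)
    tr, det = Jxx + Jyy, Jxx * Jyy - Jxy ** 2
    disc = np.sqrt(np.maximum(tr ** 2 / 4 - det, 0))
    return tr / 2 + disc, tr / 2 - disc              # larger, smaller eigenvalue

# Toy image: dark disc (pupil) on a brighter background
yy, xx = np.mgrid[:128, :128]
img = np.where((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2, 0.1, 0.8)
l1, l2 = structure_tensor_eigvals(img)
print("boundary response exceeds flat-region response:", l1[64, 34] > 10 * l1[64, 64])
</pre>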