Property:Abstract

This is a property of type Text.

Showing 20 pages using this property.
A
The Person-Centered Care (PCC) paradigm advocates that instead of being the passive target of a medical intervention, patients should play an active part in their care and in the decision-making process, together with clinicians. Although new mobile and wearable technologies have created a new wave of personalized health-related applications, it is still unclear how these technologies can be used in health care institutions in order to support person-centered care. In order to investigate this matter, we undertook a pilot study aimed at determining if and how activity monitoring can support person-centered care routines for patients undergoing total hip replacement surgery. This is a preliminary report describing the methodology, preliminary results, and some practical challenges. We present here an orientation-invariant, accelerometer-based activity monitoring method, especially designed to address the requirements of monitoring in-patients in a real clinical setting. We also present and discuss some practical issues related to complying with hospital requirements and collaborating with hospital staff.
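As a rough illustration of the orientation-invariant idea mentioned above, a minimal sketch follows that uses the acceleration magnitude (which does not depend on how the sensor is worn) as the basis for simple per-window activity features. The sampling rate, window length, feature set, and activity threshold are illustrative assumptions, not the method from the paper.

```python
import numpy as np

def orientation_invariant_features(acc_xyz, fs=50, window_s=5):
    """Simple orientation-invariant features from raw accelerometer data.

    acc_xyz : (N, 3) array of accelerations
    fs      : sampling rate in Hz (assumed)
    window_s: window length in seconds (assumed)
    """
    magnitude = np.linalg.norm(acc_xyz, axis=1)   # invariant to sensor orientation
    win = int(fs * window_s)
    n_windows = len(magnitude) // win
    feats = []
    for i in range(n_windows):
        seg = magnitude[i * win:(i + 1) * win]
        feats.append([seg.mean(), seg.std(), seg.max() - seg.min()])
    return np.asarray(feats)

# Illustrative usage on synthetic data: flag windows with high variability as "active"
acc = np.random.randn(5000, 3) * 0.05 + np.array([0.0, 0.0, 1.0])
features = orientation_invariant_features(acc)
is_active = features[:, 1] > 0.1   # std-dev threshold (illustrative)
```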
Developing Cyber-Physical Systems requires methods and tools to support simulation and verification of hybrid (both continuous and discrete) models. The Acumen modeling and simulation language is an open source testbed for exploring the design space of what rigorous-but-practical next-generation tools can deliver to developers of Cyber-Physical Systems. Like verification tools, a design goal for Acumen is to provide rigorous results. Like simulation tools, it aims to be intuitive, practical, and scalable. However, it is far from evident whether these two goals can be achieved simultaneously.

This paper explains the primary design goals for Acumen, the core challenges that must be addressed in order to achieve these goals, the "agile research method" taken by the project, the steps taken to realize these goals, the key lessons learned, and the emerging language design.
Nowadays, most information processing steps in the printing industry are highly automated, except the last one – print quality assessment and control. Usually quality assessment is a manual, tedious, and subjective procedure. This article presents a survey of the relatively few developments in the field of computational intelligence-based print quality assessment and control in offset colour printing. Recent achievements in this area and advances in applied computational intelligence, expert and decision support systems lay good foundations for creating practical tools to automate the last step of the printing process.
Imaging and image analysis have become important in laryngeal diagnostics. Various techniques, such as videostroboscopy, videokymography, digital kymography, or ultrasonography are available and are used in research and clinical practice. This paper reviews recent advances in imaging for laryngeal diagnostics.
Emotions play an important role in human communication, interaction, and decision-making processes. Therefore, considerable efforts have been made towards the automatic identification of human emotions; in particular, electroencephalogram (EEG) signals and Data Mining (DM) techniques have been used to create models recognizing the affective states of users. However, most previous works have used clinical grade EEG systems with at least 32 electrodes. These systems are expensive and cumbersome, and therefore unsuitable for usage during normal daily activities. Smaller EEG headsets such as the Emotiv are now available and can be used during daily activities. This paper investigates the accuracy and applicability of previous affective recognition methods on data collected with an Emotiv headset while participants used a personal computer to fulfill several tasks. Several features were extracted from four channels only (AF3, AF4, F3 and F4 in accordance with the 10–20 system). Both Support Vector Machine and Naïve Bayes were used for emotion classification. Results demonstrate that such methods can be used to accurately detect emotions using a small EEG headset during a normal daily activity. © 2018, Springer-Verlag GmbH Germany, part of Springer Nature.
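A minimal sketch of this kind of pipeline follows: band-power features from the four frontal channels, classified with SVM and Naïve Bayes. The frequency bands, Welch parameters, and synthetic data are illustrative assumptions; the paper's actual feature set and trial structure may differ.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}   # illustrative bands
CHANNELS = ["AF3", "AF4", "F3", "F4"]

def band_power_features(eeg, fs=128):
    """eeg: (4, n_samples) array for one trial; returns one feature vector."""
    feats = []
    for ch in eeg:
        f, psd = welch(ch, fs=fs, nperseg=fs * 2)
        for lo, hi in BANDS.values():
            feats.append(psd[(f >= lo) & (f < hi)].mean())   # mean power per band
    return np.asarray(feats)

# Synthetic placeholder trials and labels, only to show the workflow
trials = [np.random.randn(4, 128 * 10) for _ in range(40)]
labels = np.random.randint(0, 2, size=40)
X = np.vstack([band_power_features(t) for t in trials])

for clf in (SVC(kernel="rbf"), GaussianNB()):
    print(type(clf).__name__, cross_val_score(clf, X, labels, cv=5).mean())
```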
We propose a new active learning method for classification, which handles label noise without relying on multiple oracles (i.e., crowdsourcing). We propose a strategy that selects (for labeling) instances with a high influence on the learned model. An instance x is said to have a high influence on the model h, if training h on x (with label y = h(x)) would result in a model that greatly disagrees with h on labeling other instances. Then, we propose another strategy that selects (for labeling) instances that are highly influenced by changes in the learned model. An instance x is said to be highly influenced, if training h with a set of instances would result in a committee of models that agree on a common label for x but disagree with h(x). We compare the two strategies and we show, on different publicly available datasets, that selecting instances according to the first strategy while eliminating noisy labels according to the second strategy, greatly improves the accuracy compared to several benchmark methods, even when a significant number of instances are mislabeled. © Springer-Verlag Berlin Heidelberg 2017
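A minimal sketch of the first selection strategy described above: the influence of a candidate instance is estimated by adding it (with its current predicted label) to the training set, retraining, and measuring how much the retrained model disagrees with the current one on the remaining pool. Function and variable names are hypothetical; this is not the authors' implementation.

```python
import numpy as np
from sklearn.base import clone

def influence_of_instance(model, X_labeled, y_labeled, x, X_pool):
    """Disagreement caused by adding x (labeled with h(x)) to the training set.

    A high value means the candidate would change the model's predictions on
    many other instances -- the first strategy sketched in the abstract.
    """
    y_x = model.predict(x.reshape(1, -1))                     # label x with current hypothesis h
    retrained = clone(model).fit(np.vstack([X_labeled, x]),
                                 np.append(y_labeled, y_x))
    before = model.predict(X_pool)
    after = retrained.predict(X_pool)
    return np.mean(before != after)                           # fraction of pool in disagreement

# Illustrative usage: query the unlabeled instance with the highest influence
# scores = [influence_of_instance(h, X_l, y_l, x, X_pool) for x in X_pool]
# query_idx = int(np.argmax(scores))
```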
In this paper, different algorithms for visual odometry are evaluated for navigating an agricultural weeding robot in an outdoor field environment. Today, an encoder wheel keeps track of the weeding tool's position relative to the camera, but the system suffers from wheel slippage and errors caused by the uneven terrain. To overcome these difficulties, the aim is to replace the encoders with visual odometry using the plant-recognition camera. Four different optical flow algorithms are tested on four different surfaces: indoor carpet, outdoor asphalt, grass, and soil. The tests are performed on an experimental platform. The results show that the errors consist mainly of dropouts caused by exceeding the maximum speed, and of calibration errors due to uneven ground. The number of dropouts can be reduced by limiting the maximum speed and detecting missing frames. The calibration problem can be solved using stereo cameras. This gives a height measurement, and the calibration will be given by the camera mounting. The algorithm using normalized cross-correlation shows the best results in terms of number of dropouts, accuracy, and calculation time.
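As a rough sketch of normalized cross-correlation based displacement estimation between consecutive frames, the snippet below locates a central patch of the previous frame in the current frame with OpenCV template matching. The single-patch strategy and patch size are illustrative assumptions, not the evaluated algorithm itself.

```python
import cv2

def estimate_shift_ncc(prev_gray, curr_gray, patch=64):
    """Estimate image-plane translation between two frames via normalized
    cross-correlation (cv2.matchTemplate with TM_CCORR_NORMED)."""
    h, w = prev_gray.shape
    y0, x0 = h // 2 - patch // 2, w // 2 - patch // 2
    template = prev_gray[y0:y0 + patch, x0:x0 + patch]
    result = cv2.matchTemplate(curr_gray, template, cv2.TM_CCORR_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    dx, dy = max_loc[0] - x0, max_loc[1] - y0
    return dx, dy, max_val      # a low max_val can be treated as a dropout

# Usage idea: accumulate (dx, dy) over frames and scale by the ground-plane calibration.
```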
An analysis of bio-basis function neural networks is presented, which shows that the similarity metric used is a linear function and that bio-basis function neural networks therefore often end up being just linear classifiers in high-dimensional spaces. This is a consequence of four things: the linearity of the distance measure, the normalization of the distance measure, the recommended default values of the parameters, and that biological data sets are sparse.
This work combines a database-centric architecture, which supports Ambient Intelligence (AmI) for Ambient Assisted Living, with a ROS-based mobile sensing and interaction robot. The role of the active database is to monitor and respond to events in the environment, and the robot subscribes to tasks issued by the AmI system. The robot can autonomously perform tasks such as searching for and interacting with a person. Consequently, the two systems combine their capabilities and compensate for each other's lack of computational, sensing, and actuation resources.
We study agents situated in partially observable environments, who do not have sufficient resources to create conformant (complete) plans. Instead, they create plans which are conditional and partial, execute or simulate them, and learn from experience to evaluate their quality. Our agents employ an incomplete symbolic deduction system based on Active Logic and Situation Calculus for reasoning about actions and their consequences. An Inductive Logic Programming algorithm generalises observations and deduced knowledge so that the agents can choose the best plan for execution.

We describe an architecture which allows ideas and solutions from several sub-fields of Artificial Intelligence to be joined together in a controlled and manageable way. In our opinion, no situated agent can achieve true rationality without using at least logical reasoning and learning. In practice, it is clear that pure logic is not able to cope with all the requirements put on reasoning, thus more domain-specific solutions, like planners, are also necessary. Finally, any realistic agent needs a reactive module to meet demands of dynamic environments. Our architecture is designed in such a way that those three elements interact in order to complement each other's weaknesses and reinforce each other's strengths.
When walking on inclined ground, the biological foot adjusts the ankle angle accordingly. Prosthetic foot users often have a limited range of motion in their ankle, which makes walking on hills uncomfortable. This paper describes a system which can autonomously correct the ankle angle to match the inclination. The ground angle is estimated using an accelerometer. The angle between the foot blade and the heel is then adjusted with a DC motor. Since the controller only activates the motor when the foot is lifted, and thus not loaded, a small powered system can be used.
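A minimal sketch of the two ideas above, estimating the ground angle from gravity with a foot-mounted accelerometer and only driving the motor when the foot is unloaded. Axis conventions, the tolerance, and the controller form are illustrative assumptions.

```python
import math

def ground_angle_deg(ax, az):
    """Estimate ground inclination from a foot-mounted accelerometer while the
    foot rests flat on the ground, using gravity only.
    ax: acceleration along the walking direction, az: perpendicular to the sole."""
    return math.degrees(math.atan2(ax, az))

def update_ankle(target_deg, current_deg, foot_loaded, tolerance=1.0):
    """Return an illustrative motor command; the actuator is only driven when
    the foot is lifted (unloaded), as described in the abstract."""
    if foot_loaded:
        return 0.0                                   # never actuate under load
    error = target_deg - current_deg
    return 0.0 if abs(error) < tolerance else error  # proportional-style correction (sketch)
```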
This paper presents an overview of an autonomous robotic system for material handling. The system is being developed by extending the functionalities of traditional AGVs to be able to operate reliably and safely in highly dynamic environments. Traditionally, the reliable functioning of AGVs relies on the availability of adequate infrastructure to support navigation. In the target environments of our system, such infrastructure is difficult to set up in an efficient way. Additionally, the locations of objects to handle are unknown, which requires runtime object detection and tracking. Another requirement to be fulfilled by the system is the ability to generate trajectories dynamically, which is uncommon in industrial AGV systems.
This paper is concerned with the problem of image-analysis-based detection of local defects embedded in particleboard surfaces. The technique developed, though simple, is efficient; it is based on the analysis of the discrete probability distribution of the image intensity values and the 2D discrete Walsh transform. Robust global features characterizing a surface texture are extracted and then analyzed by a pattern classifier. The classifier not only assigns the pattern to the quality or defective class, but also provides the certainty value attributed to the decision. A 100% correct classification accuracy was obtained when testing the proposed technique on a set of 200 images.
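A rough sketch of such global features follows: coefficients of a 2D Walsh-Hadamard transform combined with an intensity histogram. Note that scipy's `hadamard` gives the natural (Hadamard) ordering rather than the sequency-ordered Walsh transform, and the block size, coefficient count, and 8-bit intensity range are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import hadamard

def walsh_features(img, size=64, n_coeffs=16):
    """Global texture features from a 2D Walsh-Hadamard transform plus the
    intensity histogram (illustrative block size and coefficient count)."""
    img = np.asarray(img, dtype=float)[:size, :size]   # crop to a power-of-two block
    H = hadamard(size)
    wht = H @ img @ H / size                           # 2D Walsh-Hadamard transform
    spectrum = np.sort(np.abs(wht).ravel())[::-1][:n_coeffs]
    hist, _ = np.histogram(img, bins=16, range=(0, 255), density=True)
    return np.concatenate([spectrum, hist])

# Features from many surface images could then be fed to a classifier that also
# reports a certainty value, e.g. probability estimates from sklearn's SVC(probability=True).
```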
Multiple classifiers consist of sets of subclassifiers, whose individual predictions are combined to classify new objects. These approaches attract the interest of researchers as they can outperform single classifiers on a wide range of classification problems. This paper presents an experimental study of using the rule induction algorithm MODLEM in the multiple classifier scheme called the combiner, which is a specific meta-learning approach to aggregating the answers of component classifiers. Our experimental results show that the improvement in predictive accuracy depends on the independence of the errors made by the base classifiers. Moreover, we summarise our experience with using MODLEM as a component in other multiple classifiers, namely bagging and n2 classifiers.
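As a generic illustration of a combiner-style meta-learning scheme (a meta-learner trained on the component classifiers' outputs), the sketch below uses scikit-learn's stacking. MODLEM is not available in scikit-learn, so a decision tree stands in for the rule-based component; the dataset and component choices are illustrative.

```python
from sklearn.ensemble import StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Heterogeneous component classifiers; the final estimator acts as the "combiner"
# trained on their predictions (a decision tree stands in for MODLEM rules here).
base = [("tree", DecisionTreeClassifier(max_depth=5)),
        ("nb", GaussianNB()),
        ("knn", KNeighborsClassifier())]
combiner = StackingClassifier(estimators=base, final_estimator=LogisticRegression())
print(cross_val_score(combiner, X, y, cv=5).mean())
```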
The main objective of this paper is detection, recognition, and abundance estimation of objects representing the Prorocentrum minimum (Pavillard) Schiller (P. minimum) species in phytoplankton images. The species is known to cause harmful blooms in many estuarine and coastal environments. The proposed technique for solving the task exploits images of two types, namely, those obtained using light and fluorescence microscopy. Various image preprocessing techniques are applied to extract a variety of features characterizing P. minimum cells and cell contours. Relevant feature subsets are then selected and used in support vector machine (SVM) as well as random forest (RF) classifiers to distinguish between P. minimum cells and other objects. To improve the cell abundance estimation accuracy, classification results are corrected based on probabilities of interclass misclassification. The developed algorithms were tested using 158 phytoplankton images. There were 920 P. minimum cells in the images in total. The algorithms detected 98.1% of the P. minimum cells present in the images and correctly classified 98.09% of all detected objects. The classification accuracy of detected P. minimum cells was equal to 98.9%, yielding a 97.0% overall recognition rate of P. minimum cells. The feature set used in this work has shown considerable tolerance to out-of-focus distortions. Tests of the system by phytoplankton experts in the cell abundance estimation task of the P. minimum species have shown that its performance is comparable to, or even better than, that of phytoplankton experts in manual counting of artificial microparticles similar to P. minimum cells. The automated system detected and correctly recognized 308 (91.1%) of 338 P. minimum cells found by experts in 65 phytoplankton images taken from new phytoplankton samples, and erroneously assigned 3% of other objects to the P. minimum class. Note that, due to large variations in texture and size of P. minimum cells as well as background, the task performed by the system was more complex than that performed by the experts when counting artificial microparticles similar to P. minimum cells.
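A minimal sketch of the abundance-correction idea mentioned above: if the per-class misclassification probabilities are estimated on validation data, the observed class counts can be corrected by solving a small linear system. The probabilities and counts below are illustrative, not the paper's values.

```python
import numpy as np

def corrected_counts(observed, confusion_prob):
    """Correct raw per-class counts using estimated interclass misclassification
    probabilities (a sketch of the correction idea, not the paper's procedure).

    confusion_prob[i, j] = P(classified as j | true class i), rows sum to 1.
    observed[j] = number of objects assigned to class j.
    Solving observed = confusion_prob.T @ true for `true` gives corrected abundances.
    """
    return np.linalg.solve(confusion_prob.T, observed)

# Two classes: P. minimum cells vs. other objects (illustrative probabilities)
P = np.array([[0.98, 0.02],    # true P. minimum -> classified as (P. minimum, other)
              [0.03, 0.97]])   # true other      -> classified as (P. minimum, other)
obs = np.array([320.0, 680.0])
print(corrected_counts(obs, P))
```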
The E* algorithm is a path planning method capable of dynamic replanning and user-configurable path cost interpolation. It calculates a navigation function as a sampling of an underlying smooth goal distance that takes into account a continuous notion of risk that can be controlled in a fine-grained manner. E* results in more appropriate paths during gradient descent. Dynamic replanning means that changes in the environment model can be handled by repairing the existing solution, avoiding the expense of complete replanning. This helps compensate for the increased computational effort required for interpolation. We present the theoretical basis and a working implementation, as well as measurements of the algorithm's precision, topological correctness, and computational effort. © 2005 IEEE.
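For orientation, the sketch below shows a plain grid navigation function with steepest-descent path extraction. It is a simplified stand-in only: unlike E*, it does not interpolate the underlying smooth goal distance and has no dynamic-repair mechanism; cell costs (assumed strictly positive) stand in for the continuous notion of risk.

```python
import heapq
import numpy as np

def navigation_function(cost, goal):
    """Dijkstra-style navigation function on a 4-connected grid: value[c] is the
    accumulated traversal cost from cell c to the goal (non-interpolated sketch)."""
    value = np.full(cost.shape, np.inf)
    value[goal] = 0.0
    heap = [(0.0, goal)]
    while heap:
        v, (r, c) = heapq.heappop(heap)
        if v > value[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < cost.shape[0] and 0 <= nc < cost.shape[1]:
                nv = v + cost[nr, nc]          # cell cost encodes risk
                if nv < value[nr, nc]:
                    value[nr, nc] = nv
                    heapq.heappush(heap, (nv, (nr, nc)))
    return value

def descend(value, start):
    """Follow the steepest descent of the navigation function down to the goal."""
    path, cell = [start], start
    while value[cell] > 0 and np.isfinite(value[cell]):
        r, c = cell
        neighbours = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= r + dr < value.shape[0] and 0 <= c + dc < value.shape[1]]
        cell = min(neighbours, key=lambda p: value[p])
        path.append(cell)
    return path

# Illustrative usage:
# cost = np.ones((50, 50)); cost[10:40, 25] = 50.0   # a high-risk wall
# value = navigation_function(cost, goal=(45, 45))
# path = descend(value, start=(2, 2))
```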
It is desirable for an engine control system to maintain stable combustion. High combustion variability (typically measured by the relative variations in produced work, COV(IMEP)) can indicate the use of too much EGR or a too lean air-fuel mixture, which results in lower engine efficiency (in terms of fuel and emissions) and reduced driveability. The coefficient of variation (COV) of the ion current integral has previously been shown in several papers to be correlated with the coefficient of variation of IMEP for various disturbances (e.g. AFR, EGR and fuel timing). This paper presents a cycle-to-cycle ion-current-based method of estimating the approximate category of IMEP (either normal burn, slow burn, partial burn or misfire) for the case of a lean air-fuel ratio. The rate of appearance of the partial burn and misfire categories is then shown to be well correlated with the onset of high combustion variability (high COV(IMEP)). It is demonstrated that the detection of these categories can result in faster determination (prediction) of high variability compared to only using the COV of the ion integral.
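A minimal sketch of the two quantities discussed above: the coefficient of variation over a window of cycles, and a per-cycle burn category from the ion-current integral. The threshold fractions are illustrative assumptions, not the calibration from the paper.

```python
import numpy as np

def cov(values):
    """Coefficient of variation: standard deviation relative to the mean."""
    values = np.asarray(values, dtype=float)
    return values.std() / values.mean()

def categorize_cycle(ion_integral, normal_level, partial_frac=0.5, misfire_frac=0.1):
    """Assign one combustion cycle to a burn category from its ion-current
    integral (thresholds are illustrative fractions of a normal-burn level)."""
    if ion_integral < misfire_frac * normal_level:
        return "misfire"
    if ion_integral < partial_frac * normal_level:
        return "partial burn"
    return "normal/slow burn"

# A rising rate of "partial burn" / "misfire" cycles can then serve as an early
# indicator of high COV(IMEP), as described above.
```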
Periocular biometrics specifically refers to the externally visible skin region of the face that surrounds the eye socket. Its utility is especially pronounced when the iris or the face cannot be properly acquired, since it is the ocular modality requiring the least constrained acquisition process. It can be captured over a wide range of distances, even under partial face occlusion (close distance) or low-resolution iris (long distance), making it very suitable for unconstrained or uncooperative scenarios. It also avoids the need for iris segmentation, an issue in difficult images. In such situations, identifying a suspect when only the periocular region is visible is one of the toughest real-world challenges in biometrics. The richness of the periocular region in terms of identity is so high that the whole face can even be reconstructed from images of the periocular region alone. The technological shift to mobile devices has also resulted in many identity-sensitive applications becoming prevalent on these devices.
In the era of big data, considerable research focus is being put on designing efficient algorithms capable of learning and extracting high-level knowledge from ubiquitous data streams in an online fashion. While most existing algorithms assume that data samples are drawn from a stationary distribution, several complex environments deal with data streams that are subject to change over time. Taking this aspect into consideration is an important step towards building truly aware and intelligent systems. In this paper, we propose GNG-A, an adaptive method for incremental unsupervised learning from evolving data streams experiencing various types of change. The proposed method maintains a continuously updated network (graph) of neurons by extending the Growing Neural Gas algorithm with three complementary mechanisms, allowing it to closely track both gradual and sudden changes in the data distribution. First, an adaptation mechanism handles local changes where the distribution is only non-stationary in some regions of the feature space. Second, an adaptive forgetting mechanism identifies and removes neurons that become irrelevant due to the evolving nature of the stream. Finally, a probabilistic evolution mechanism creates new neurons when there is a need to represent data in new regions of the feature space. The proposed method is demonstrated for anomaly and novelty detection in non-stationary environments. Results show that the method handles different data distributions and efficiently reacts to various types of change. © 2018 The Author(s)
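For context, the sketch below is a pared-down Growing Neural Gas update loop (winner/runner-up adaptation, edge ageing, periodic insertion near the largest accumulated error). It is only the standard GNG baseline; the adaptation, adaptive forgetting, and probabilistic evolution mechanisms that distinguish GNG-A are not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

class SimpleGNG:
    """Pared-down Growing Neural Gas sketch (not GNG-A)."""

    def __init__(self, dim, eps_w=0.05, eps_n=0.006, max_age=50, insert_every=100):
        self.W = np.random.rand(2, dim)                  # two initial neurons
        self.error = np.zeros(2)
        self.edges = {}                                  # (i, j) -> age, with i < j
        self.eps_w, self.eps_n = eps_w, eps_n
        self.max_age, self.insert_every = max_age, insert_every
        self.t = 0

    def learn(self, x):
        self.t += 1
        d = np.linalg.norm(self.W - x, axis=1)
        s1, s2 = np.argsort(d)[:2]                       # winner and runner-up
        self.error[s1] += d[s1] ** 2
        self.W[s1] += self.eps_w * (x - self.W[s1])      # move winner towards x
        for (i, j) in list(self.edges):                  # age winner edges, move neighbours
            if s1 in (i, j):
                other = j if i == s1 else i
                self.W[other] += self.eps_n * (x - self.W[other])
                self.edges[(i, j)] += 1
        self.edges[tuple(sorted((s1, s2)))] = 0          # refresh/create winner-runner-up edge
        self.edges = {e: a for e, a in self.edges.items() if a <= self.max_age}
        if self.t % self.insert_every == 0:              # simplistic insertion step
            q = int(np.argmax(self.error))
            new = self.W[q] + 0.01 * np.random.randn(self.W.shape[1])
            self.W = np.vstack([self.W, new])
            self.error[q] /= 2
            self.error = np.append(self.error, self.error[q])

# Illustrative usage on a stream of 2-D samples:
# gng = SimpleGNG(dim=2)
# for point in stream:
#     gng.learn(np.asarray(point, dtype=float))
```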
Outer hair cells (OHC) in the cochlea of the inner ear, together with the local structures of the basilar membrane, reticular lamina and tectorial membrane, constitute second-order adaptive primary filters (PF). We used them as the basis for designing a serial-parallel signal filtering system. We determined a suitable number of PFs to include in the Gaussian channels of the system, the summation weights of the output signals, and the distribution of PFs along the basilar membrane. A Gaussian channel consisting of five PFs is presented as an example, and the properties of the channel operating in linear and non-linear modes are determined during adaptation and under efferent control. The results suggest that applying biological filtering principles can be useful for the design of cochlear implants with new speech-encoding strategies.
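As a loose illustration of one such channel, the sketch below sums the outputs of a small bank of second-order band-pass filters with Gaussian weights around a centre frequency. The filter count, spacing, Q, and weighting are assumptions for illustration only, not the parameters derived in the paper.

```python
import numpy as np
from scipy.signal import butter, lfilter

def gaussian_channel(signal, fs, center_hz, n_filters=5, spacing=1.1, q=4.0):
    """Illustrative 'Gaussian channel': five second-order band-pass filters whose
    centre frequencies straddle center_hz, summed with Gaussian weights."""
    offsets = np.arange(n_filters) - n_filters // 2
    centers = center_hz * spacing ** offsets
    weights = np.exp(-0.5 * offsets ** 2)
    weights /= weights.sum()
    out = np.zeros_like(signal, dtype=float)
    for fc, w in zip(centers, weights):
        b, a = butter(2, [fc * (1 - 1 / (2 * q)) / (fs / 2),
                          fc * (1 + 1 / (2 * q)) / (fs / 2)], btype="band")
        out += w * lfilter(b, a, signal)
    return out

# e.g. one channel centred at 1 kHz for a 16 kHz signal:
# y = gaussian_channel(x, fs=16000, center_hz=1000.0)
```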