Property:Abstract
This is a property of type Text.
P
<p>For the alignment of two fingerprints, the positions of certain landmarks are needed. These should be extracted automatically with a low misidentification rate. As landmarks we suggest the prominent symmetry points (core points) in the fingerprint. They are extracted from the complex orientation field estimated from the global structure of the fingerprint, i.e. the overall pattern of the ridges and valleys. Complex filters, applied to the orientation field at multiple resolution scales, are used to detect the symmetry and the type of symmetry. Experimental results are reported.</p> +
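<p>Below is a minimal sketch of the complex-filtering idea described above, using a single resolution level and assumed filter scales; it is illustrative, not the paper's implementation.</p>

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def complex_orientation_field(img, sigma=2.0):
    # Gaussian-derivative gradients of the fingerprint image.
    gx = gaussian_filter(img, sigma, order=(0, 1))  # d/dx
    gy = gaussian_filter(img, sigma, order=(1, 0))  # d/dy
    # Squaring doubles the angle, so opposite gradient directions
    # reinforce the same ridge orientation.
    return (gx + 1j * gy) ** 2

def core_point_response(z, sigma=8.0):
    # First-order complex symmetry filter (x + iy) * Gaussian; parabolic
    # (core-like) symmetry in the orientation field responds strongly.
    r = int(3 * sigma)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    h = (x + 1j * y) * np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return np.abs(fftconvolve(z, h, mode="same"))

# Usage: peaks of core_point_response(complex_orientation_field(img))
# mark candidate core points; in practice this is repeated over scales.
```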
<p>Two-dimensional gel electrophoresis is the preferred method for simultaneously separating and visualising thousands of proteins. An important part of computer-aided analysis of the proteome is the ability to automatically detect, identify, and quantify the proteins by means of automatic image processing. We present a fast and sensitive method for protein spot detection using the Circular Symmetry Tensor. It is based upon the work of Bigun on symmetry derivatives of Gaussians.</p> +
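<p>A hedged sketch of spot detection in the same complex-filter family: assuming radially symmetric spots yield an orientation field proportional to e<sup>i2φ</sup> after angle doubling, a second-order complex filter can stand in for the Circular Symmetry Tensor response. Scales and normalization are placeholders, not the paper's exact method.</p>

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def spot_response(gel_img, sigma_g=1.0, sigma_f=4.0):
    # Angle-doubled complex orientation field of the gel image.
    gx = gaussian_filter(gel_img, sigma_g, order=(0, 1))
    gy = gaussian_filter(gel_img, sigma_g, order=(1, 0))
    z = (gx + 1j * gy) ** 2
    # Radially symmetric spots give an orientation field ~ exp(i*2*phi);
    # match it with a second-order symmetry filter (x + iy)^2 * Gaussian.
    r = int(3 * sigma_f)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    h = (x + 1j * y) ** 2 * np.exp(-(x**2 + y**2) / (2.0 * sigma_f**2))
    return np.abs(fftconvolve(z, np.conj(h), mode="same"))  # peaks at spots
```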
<p>A novel prototype-based framework for image segmentation is introduced and successfully applied to cell segmentation in microscopy imagery. This study is concerned with precise contour detection for objects representing the Prorocentrum minimum species in phytoplankton images. The framework requires a single object with a ground-truth contour as a prototype to perform detection of the contour for the remaining objects. The level set method is chosen as the segmentation algorithm and its parameters are tuned by differential evolution. The fitness function is based on the distance between pixels near the contour in the prototype image and pixels near the detected contour in the target image. Pixels of interest correspond to several concentric bands of various widths in the outer and inner areas relative to the contour. The usefulness of the introduced approach was demonstrated by comparing it to the basic level set and advanced Weka segmentation techniques. Solving the parameter selection problem of the level set algorithm considerably improved segmentation accuracy.</p> +
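<p>A hedged sketch of the parameter-tuning idea follows: differential evolution searches level-set parameters against a contour-distance fitness on a single prototype. The morphological Chan-Vese level set from scikit-image and a symmetric Hausdorff distance are stand-ins for the paper's level set and its concentric-band pixel distance; <code>proto_img</code> and <code>proto_mask</code> (ground-truth prototype) are assumed inputs.</p>

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.spatial.distance import directed_hausdorff
from skimage.measure import find_contours
from skimage.segmentation import morphological_chan_vese

def contour_distance(mask_a, mask_b):
    # Symmetric Hausdorff distance between the two contours.
    ca = find_contours(mask_a.astype(float), 0.5)
    cb = find_contours(mask_b.astype(float), 0.5)
    if not ca or not cb:
        return 1e9  # penalize degenerate segmentations
    pa, pb = np.vstack(ca), np.vstack(cb)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

def fitness(params, img, gt_mask):
    lambda1, lambda2, smoothing = params
    seg = morphological_chan_vese(img, num_iter=100,  # `num_iter` in recent scikit-image
                                  smoothing=int(round(smoothing)),
                                  lambda1=lambda1, lambda2=lambda2)
    return contour_distance(seg, gt_mask)

def tune_on_prototype(proto_img, proto_mask):
    # DE searches the level-set parameter space against the prototype.
    bounds = [(0.1, 5.0), (0.1, 5.0), (0.0, 4.0)]  # lambda1, lambda2, smoothing
    result = differential_evolution(fitness, bounds,
                                    args=(proto_img, proto_mask),
                                    maxiter=20, popsize=10, seed=0)
    return result.x  # parameters reused for the remaining objects
```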
<p>Biometric systems, such as face tracking and recognition, are increasingly being used as a means of security in many areas. The usability of these systems depends not only on how accurate they are in terms of detection and recognition but also on how well they withstand attacks. In this paper we develop a text-driven face-video signal from the XM2VTS database. The synthesized video can be used as a playback attack against face detection and recognition systems. We use a Hidden Markov Model to recognize the speech of a person and use the transcription file to reshuffle the image sequences according to the prompted text. The discontinuities in the new video are significantly reduced by using a pyramid-based multi-resolution frame interpolation technique. The playback can also be used to test liveness detection systems that rely on lip-motion-to-speech synchronization and motion of the head while posing/speaking. Finally, we suggest possible approaches to enable biometric systems to withstand this kind of attack. Other uses of our results include web-based video communication for electronic commerce.</p> +
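<p>An illustrative sketch of the reshuffling step only: given per-word frame spans from a hypothetical HMM alignment of the transcription, frames are spliced in the order of the newly prompted text. The names <code>frames</code>, <code>alignment</code>, and <code>prompted_words</code> are assumptions for illustration, not the authors' code.</p>

```python
# frames: list of video frames; alignment: word -> (start, end) frame span
# obtained from a (hypothetical) HMM forced alignment of the transcription.
def reshuffle_frames(frames, alignment, prompted_words):
    out = []
    for word in prompted_words:
        start, end = alignment[word]     # frame span where `word` is uttered
        out.extend(frames[start:end])    # splice that span into the new video
    # Splice boundaries are then smoothed by multi-resolution interpolation.
    return out
```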
<p>Reliable feature extraction is crucial for accurate biometric recognition. Unfortunately, feature extraction is hampered by noisy input data, especially in the case of fingerprints. We propose a method to enhance the quality of a given fingerprint with the purpose of improving recognition performance. A Laplacian-like image-scale pyramid is used to decompose the original fingerprint into three smaller images corresponding to different frequency bands. In a further step, contextual filtering is performed using these pyramid levels and 1D Gaussians, where the corresponding filtering directions are derived from the frequency-adapted structure tensor. All image processing is done in the spatial domain, avoiding block artifacts while conserving the biometric signal well. We report comparative results and present quantitative improvements, applying the standardized NIST FIS2 fingerprint matcher to the FVC2004 fingerprint database along with our enhancement as well as two others. The study confirms that the suggested enhancement robustifies feature detection, e.g. of minutiae, which in turn improves recognition (20% relative improvement in equal error rate on DB3 of FVC2004).</p> +
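<p>A simplified sketch of orientation-adapted contextual smoothing on one pyramid level: the filtering direction comes from the structure tensor, and a sampled 1-D Gaussian is applied along the local ridge orientation. This is an illustrative stand-in, not the paper's full multi-band pipeline.</p>

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def ridge_orientation(img, sigma_g=1.0, sigma_t=5.0):
    gx = gaussian_filter(img, sigma_g, order=(0, 1))
    gy = gaussian_filter(img, sigma_g, order=(1, 0))
    # Structure-tensor components, averaged at the tensor scale.
    j11 = gaussian_filter(gx * gx, sigma_t)
    j22 = gaussian_filter(gy * gy, sigma_t)
    j12 = gaussian_filter(gx * gy, sigma_t)
    # Dominant gradient angle via the double-angle formula; the ridge
    # direction is orthogonal to it.
    return 0.5 * np.arctan2(2.0 * j12, j11 - j22) + np.pi / 2.0

def smooth_along(img, theta, length=7):
    # Weighted average along the local orientation (a sampled 1-D
    # Gaussian) using bilinear interpolation at off-grid positions.
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    offsets = np.arange(-(length // 2), length // 2 + 1)
    weights = np.exp(-offsets**2 / (2.0 * (length / 4.0) ** 2))
    acc = np.zeros_like(img, dtype=float)
    for o, wgt in zip(offsets, weights):
        coords = [yy + o * np.sin(theta), xx + o * np.cos(theta)]
        acc += wgt * map_coordinates(img, coords, order=1, mode="nearest")
    return acc / weights.sum()
```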
Q
<p>Image degradations can affect the different processing steps of iris recognition systems. Although several quality factors have been proposed for iris images, their specific effect on segmentation accuracy is often overlooked, with most efforts focused on their impact on recognition accuracy. Accordingly, we evaluate the impact of 8 quality measures on the performance of iris segmentation. We use a database acquired with a close-up iris sensor with a built-in quality checking process. Despite the latter, we report differences in behavior, with some measures clearly predicting segmentation performance, while others give inconclusive results. Recognition experiments with two matchers also show that segmentation and matching performance are not necessarily affected by the same factors. The resilience of one matcher to segmentation inaccuracies also suggests that segmentation errors due to low image quality are not necessarily revealed by the matcher, pointing out the importance of a separate evaluation of segmentation accuracy. © 2013 IEEE.</p> +
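<p>A minimal sketch of this style of evaluation: rank-correlating each quality measure with a per-image segmentation error. The arrays <code>quality</code> (images × measures), <code>seg_error</code>, and the list <code>names</code> are assumed inputs, not the paper's code.</p>

```python
import numpy as np
from scipy.stats import spearmanr

def quality_vs_segmentation(quality, seg_error, names):
    # One rank correlation per quality measure; a strong |rho| means the
    # measure predicts segmentation performance.
    for j, name in enumerate(names):
        rho, p = spearmanr(quality[:, j], seg_error)
        print(f"{name}: Spearman rho={rho:+.2f} (p={p:.3f})")
```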
<p><strong>Synonyms</strong></p><p>Quality assessment; Biometric quality; Quality-based processing</p><p><strong>Definition</strong></p><p>Since the establishment of biometrics as a specific research area in the late 1990s, the biometric community has focused its efforts on the development of accurate recognition algorithms (1). Nowadays, biometric recognition is a mature technology that is used in many applications, offering greater security and convenience than traditional methods of personal recognition (2).</p><p>During the past few years, biometric quality measurement has become an important concern after a number of studies and technology benchmarks demonstrated how the performance of biometric systems is heavily affected by the quality of biometric signals (3). This operationally important step has nevertheless been under-researched compared to the primary feature extraction and pattern recognition tasks (4). One of the main challenges facing biometric technologies is performance degradation in less controlled situations, and the problem of biometric quality measurement has arisen even more strongly with the proliferation of portable handheld devices with <strong>at-a-distance and on-the-move acquisition</strong> capabilities. These will require robust algorithms capable of handling a range of changing characteristics (2). Another important example is forensics, in which intrinsic operational factors further degrade recognition performance.</p><p>There are a number of factors that can affect the quality of biometric signals, and there are numerous roles for a quality measure in the context of biometric systems. This section summarizes the state of the art in the biometric quality problem, giving an overall framework of the different challenges involved.</p> +
<p>In this paper, we propose a quality function for unsupervised neural classification. The function is based on third-order polynomials. The objective of the quality function is to find regions of the input space that are sparse in data points. By maximising the quality function, we find the decision boundary between data clusters instead of the centres of the clusters. The shape and position of the decision boundary are rather insensitive to the magnitude of the weight vector established during the maximisation process. The superiority of the proposed quality function over other similar functions, as well as over the conventional clustering algorithms tested, has been observed in the experiments. The proposed quality function has been successfully used for colour image segmentation.</p> +
<p>The usefulness of questionnaire and voice data to screen for laryngeal disorders is explored. Answers to 14 questions form a questionnaire data vector. Twenty-three variables computed by the commercial "Dr.Speech" software from a digital voice recording of a sustained phonation of the vowel sound /a/ constitute a voice data vector. Categorization of the data into a healthy class and two classes of disorders, namely diffuse and nodular mass lesions of the vocal folds, is the task pursued in this work. Visualization of data and automated decisions is also an important aspect of this work. To perform the categorization, a support vector machine (SVM) is designed based on genetic search. Linear as well as nonlinear canonical correlation analysis (CCA) is employed to study relations between the questionnaire and voice data sets. Curvilinear component analysis, performing a nonlinear mapping into a two-dimensional space, is used for visualizing data and decisions. Data from 240 patients were used in the experimental studies. It was found that the questionnaire data provide more information for the categorization than the voice data. There are 3-4 common directions along which the statistically significant variations of the questionnaire and voice data occur. However, the linear relations between the variations occurring in the two data sets are not strong. On the other hand, very strong linear relations were observed between the nonlinear variates obtained from the questionnaire data and the linear ones computed from the voice data. Questionnaire data carry great potential for preventive health care in laryngology. © 2011 Elsevier Ltd. All rights reserved.</p> +
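<p>For the linear CCA step, a compact sketch using scikit-learn is given below; <code>Q</code> (n × 14 questionnaire vectors) and <code>V</code> (n × 23 voice vectors) are assumed inputs, and the genetic SVM search and nonlinear CCA are omitted.</p>

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def canonical_correlations(Q, V, n_comp=4):
    # Fit linear CCA and report the correlation of each pair of
    # canonical variates (the "common directions" between the data sets).
    cca = CCA(n_components=n_comp).fit(Q, V)
    Qc, Vc = cca.transform(Q, V)
    return [np.corrcoef(Qc[:, i], Vc[:, i])[0, 1] for i in range(n_comp)]
```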
R
<p>This paper is concerned with soft-computing-based noninvasive monitoring of the human larynx using subjects' questionnaire data. By applying random forests (RF), questionnaire data are categorized into a healthy class and several classes of disorders, including cancerous, noncancerous, diffuse, nodular, paralysis, and an overall pathological class. The most important questionnaire statements are determined using RF variable importance evaluations. To explore the data represented by the variables used by RF, t-distributed stochastic neighbor embedding (t-SNE) and multidimensional scaling (MDS) are applied to the RF data proximity matrix. When testing the developed tools on a set of data collected from 109 subjects, 100% classification accuracy was obtained on unseen data in the binary classification into healthy and pathological classes. An accuracy of 80.7% was achieved when classifying the data into the healthy, cancerous, and noncancerous classes. The t-SNE and MDS mapping techniques yield two-dimensional maps of the data and facilitate data exploration aimed at identifying subjects belonging to a “risk group”. It is expected that the developed tools will be of great help in preventive health care in laryngology.</p> +
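<p>A sketch of the visualization step: a random-forest proximity matrix derived from shared leaf assignments (a common construction, not necessarily the authors' exact one), embedded with t-SNE on the induced dissimilarities. <code>X</code> and <code>y</code> are assumed feature/label arrays.</p>

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.manifold import TSNE

def rf_proximity_embedding(X, y, n_trees=500, seed=0):
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed).fit(X, y)
    leaves = rf.apply(X)  # (n_samples, n_trees) leaf indices
    # Proximity = fraction of trees in which two samples share a leaf.
    prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
    dist = 1.0 - prox     # proximity -> dissimilarity for the embedding
    return TSNE(metric="precomputed", init="random",
                random_state=seed).fit_transform(dist)
```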
<p>The relation between heat demand and outdoor temperature (the heat power signature) is a typical feature used to diagnose abnormal heat demand. Prior work is mainly based on setting thresholds, either statistically or manually, in order to identify outliers in the power signature. However, setting the correct threshold is a difficult task, since heat demand is unique to each building. Too loose a threshold may allow outliers to go unspotted, while too tight a threshold can cause too many false alarms.</p><p>Moreover, the number of outliers alone does not reflect the level of dispersion in the power signature. However, high dispersion is often caused by faults or configuration problems and should be considered when modeling abnormal heat demand.</p><p>In this work, we present a novel method for ranking substations by measuring both dispersion and outliers in the power signature. We use robust regression to estimate a linear regression model. Observations that fall outside the threshold in this model are considered outliers. Dispersion is measured using the coefficient of determination R<sup>2</sup>, a statistical measure of how close the data are to the fitted regression line.</p><p>Our method first produces two different lists by ranking substations by number of outliers and by dispersion separately. Then, we merge the two lists into one using the Borda count method. Substations appearing at the top of the list should indicate higher abnormality in heat demand compared to the ones at the bottom. We have applied our model to data from substations connected to two district heating networks in the south of Sweden. Three different approaches, i.e. outlier-based, dispersion-based, and aggregated methods, are compared against rankings based on return temperatures. The results show that our method significantly outperforms the state-of-the-art outlier-based method. © 2018 The Authors. Published by Elsevier Ltd.</p> +
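<p>A condensed sketch of the ranking pipeline under illustrative thresholds: a robust (Huber) linear fit of demand versus outdoor temperature, outlier counting, R<sup>2</sup> as the dispersion measure, and a Borda-count merge of the two rankings. The specific robust estimator and threshold are assumptions, not necessarily the paper's choices.</p>

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

def signature_stats(temp, demand, resid_thresh=2.5):
    # Robust linear fit of the power signature; residuals beyond the
    # threshold count as outliers, R^2 measures dispersion.
    model = HuberRegressor().fit(temp.reshape(-1, 1), demand)
    resid = demand - model.predict(temp.reshape(-1, 1))
    outliers = int((np.abs(resid) > resid_thresh * np.std(resid)).sum())
    r2 = 1.0 - np.sum(resid**2) / np.sum((demand - demand.mean())**2)
    return outliers, r2

def borda_rank(substations):
    # substations: dict name -> (temp array, demand array)
    stats = {k: signature_stats(t, d) for k, (t, d) in substations.items()}
    by_outliers = sorted(stats, key=lambda k: -stats[k][0])  # most outliers first
    by_dispersion = sorted(stats, key=lambda k: stats[k][1])  # lowest R^2 first
    # Borda merge: sum of rank positions; small sums = most abnormal.
    points = {k: by_outliers.index(k) + by_dispersion.index(k) for k in stats}
    return sorted(points, key=points.get)  # most abnormal substations first
```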
<p>Unscheduled 30-day readmissions are a hallmark of Congestive Heart Failure (CHF) patients, posing significant health risks and escalating care costs. In order to reduce readmissions and curb the cost of care, it is important to initiate targeted intervention programs for patients at risk of readmission. This requires identifying high-risk patients at the time of discharge from hospital. Here, using real data from over 7,500 CHF patients hospitalized between 2012 and 2016 in Sweden, we built and tested a deep learning framework to predict 30-day unscheduled readmission. We present a cost-sensitive formulation of a Long Short-Term Memory (LSTM) neural network using expert features and contextual embedding of clinical concepts. This study targets key elements of an Electronic Health Record (EHR) driven prediction model in a single framework: using both expert and machine-derived features, incorporating sequential patterns, and addressing the class imbalance problem. We show that the model with all key elements achieves a higher discrimination ability (AUC 0.77) than the rest. Additionally, we present a simple financial analysis to estimate annual savings if targeted interventions are offered to high-risk patients. © 2019 The Authors</p> +
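<p>A minimal PyTorch sketch of the cost-sensitive formulation: an LSTM over visit sequences with a class-weighted loss for the rare readmission class. Dimensions and the weight value are illustrative assumptions, not the paper's architecture.</p>

```python
import torch
import torch.nn as nn

class ReadmissionLSTM(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, visits, features)
        _, (h, _) = self.lstm(x)               # last hidden state per sequence
        return self.head(h[-1]).squeeze(-1)    # readmission logit

# Cost sensitivity: weight the positive (readmitted) class by roughly the
# inverse class ratio (9.0 here is a placeholder, not the paper's value).
loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([9.0]))
```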
<p>A robust object/face detection technique processing every frame in real time (video rate) is presented. A methodological novelty is the suggested quantized angle features ("quangles"), designed for illumination invariance without the need for pre-processing, e.g. histogram equalization. This is achieved by using both the gradient direction and the double-angle direction (the structure tensor angle), and by ignoring the magnitude of the gradient. Boosting techniques are applied in a quantized feature space. Separable filtering and the use of lookup tables favor detection speed. Furthermore, the gradient may then be reused for other tasks as well. A side effect is that the training of effective cascaded classifiers is feasible in very short time, less than 1 hour for data sets of order 10<sup>4</sup>. We present favorable results on face detection for several public databases (e.g. 93% detection rate at a 1×10<sup>−6</sup> false positive rate on the CMU-MIT frontal face test set).</p> +
<p>A robust face detection technique along with mouth localization, processing every frame in real time (video rate), is presented. Moreover, it is exploited for on-site motion analysis to verify "liveness" as well as to achieve lip reading of digits. A methodological novelty is the suggested quantized angle features ("quangles"), designed for illumination invariance without the need for preprocessing (e.g., histogram equalization). This is achieved by using both the gradient direction and the double-angle direction (the structure tensor angle), and by ignoring the magnitude of the gradient. Boosting techniques are applied in a quantized feature space. A major benefit is the reduced processing time (i.e., the training of effective cascaded classifiers is feasible in a very short time, less than 1 h for data sets of order 10<sup>4</sup>). Scale invariance is implemented through the use of an image scale pyramid. We propose "liveness" verification barriers as applications for which a significant amount of computation is avoided when estimating motion. Novel strategies to avert advanced spoofing attempts (e.g., replayed videos which include person utterances) are demonstrated. We present favorable results on face detection for the YALE face test set and competitive results for the CMU-MIT frontal face test set as well as on "liveness" verification barriers.</p> +
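<p>A rough sketch of the quantized angle ("quangle") idea shared by the two abstracts above: keep only quantized gradient-direction and double-angle bins and discard the gradient magnitude. Bin counts and the gradient operator are illustrative assumptions.</p>

```python
import numpy as np
from scipy.ndimage import sobel

def quangles(img, n_bins=8):
    gx = sobel(img, axis=1)                     # horizontal gradient
    gy = sobel(img, axis=0)                     # vertical gradient
    angle = np.arctan2(gy, gx)                  # gradient direction in (-pi, pi]
    double = np.mod(2.0 * angle, 2.0 * np.pi)   # double-angle (orientation)
    q1 = np.floor((angle + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    q2 = np.floor(double / (2 * np.pi) * n_bins).astype(int) % n_bins
    # The gradient magnitude is deliberately ignored, which is what gives
    # the features their illumination invariance.
    return q1, q2
```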
<p>Reducing emissions and improving fuel efficiency in automobiles are important issues today. New sensor techniques are being developed to extract detailed combustion information to enable closed-loop engine control. This thesis is about a virtual sensor: measuring an ion current inside the cylinder by using the already existing spark plug, followed by signal processing for the estimation of combustion parameters. First, the thesis aims to show that the ion current signal can be used for closed-loop control of Exhaust Gas Recirculation (EGR). Use of EGR is very common in modern automobiles because of the potential reduction of NOx emissions and fuel consumption, but using too much EGR can have the reverse effect (e.g. increased fuel consumption and driveability problems). Algorithms for estimating combustion variability are proposed, and a closed-loop scheme for controlling an EGR valve is demonstrated for highway driving in a SAAB 9000. Estimation of the pressure peak position is treated for closed-loop control of ignition timing. Such estimation can be performed with the ion current but may not work if a fuel additive is used. Different methods are compared, and it is shown that using a fuel additive may even improve the estimation accuracy of the pressure peak position by about 25%. An algorithm is also proposed to estimate the pressure peak position even in the presence of EGR. Strategies for transient control of the air-fuel ratio are also compared. Air-fuel ratio control is important because even small deviations from the stoichiometric value can result in significantly increased emissions. It is found that a neural-network-based controller had the best performance, with approximately 23% lower RMS error than the adapted standard control module.</p> +
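<p>As a toy illustration of one estimation task above, the snippet below picks the late peak of a smoothed, crank-angle-resolved ion-current trace as a proxy for the pressure peak position. The thesis treats model-based estimators; this heuristic peak picking is only a sketch under that assumption.</p>

```python
import numpy as np
from scipy.signal import find_peaks, savgol_filter

def pressure_peak_proxy(crank_angle, ion_current):
    # Smooth the trace; the window must be shorter than the signal.
    smooth = savgol_filter(ion_current, window_length=21, polyorder=3)
    peaks, _ = find_peaks(smooth)
    if len(peaks) == 0:
        return None
    # Heuristic: the latest peak is taken as the thermal (post-chemical)
    # phase, whose position tracks the cylinder pressure peak.
    return crank_angle[peaks[-1]]
```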
Receiving care according to national heart failure guidelines is associated with lower total costs -- an observational study in Region Halland, Sweden +
<p>Aims: Patients with heart failure (HF) have high costs, morbidity and mortality, but it is not known if appropriate pharmacotherapy (AP), defined as compliance with international evidence-based guidelines, is associated with improved outcomes. The purpose of this study was to evaluate HF patients' health care utilization, costs and outcomes in Region Halland (RH), Sweden, and whether AP was associated with costs.</p><p>Methods and Results: 5 987 residents of RH in 2016 carried HF diagnoses. Costs were assigned to all healthcare utilization (inpatient, outpatient, emergency department, primary health care and medications) using a Patient Encounter Costing methodology. Care of HF patients cost €58.6M (€9 790/patient), representing 8.7% of RH's total visit expenses and 14.9% of inpatient care expenses. Inpatient care represented 57.2% of this expenditure, totaling €33.5M (€5 601/patient). Receiving AP was associated with significantly lower costs, by €1 130 per patient (p < 0.001, 95% confidence interval €574–€1 687). Comorbidities such as renal failure, diabetes, COPD and cancer were significantly associated with higher costs.</p><p>Conclusion: HF patients are heavy users of healthcare, particularly inpatient care. Receiving AP is associated with lower costs even after adjusting for comorbidities, although causality cannot be proven from an observational study. There may be an opportunity to decrease overall costs and improve outcomes by improving prescribing patterns and the associated high-quality care.</p><p>Published on behalf of the European Society of Cardiology. All rights reserved. © The Author(s) 2020.</p> +
<p>We suggest a set of complex differential operators that can be used to produce and filter dense orientation (tensor) fields for feature extraction, matching, and pattern recognition. We present results on the invariance properties of these operators, which we call symmetry derivatives. These show that, in contrast to ordinary derivatives, all orders of symmetry derivatives of Gaussians yield a remarkable invariance: they are obtained by replacing the original differential polynomial with the same polynomial, but in the ordinary coordinates x and y corresponding to the partial derivatives. Moreover, the symmetry derivatives of Gaussians are closed under the convolution operator and they are invariant to the Fourier transform. The equivalent of the structure tensor, representing and extracting orientations of curve patterns, had previously been shown to hold in harmonic coordinates in a nearly identical manner. As a result, the positions, orientations, and certainties of intricate patterns, e.g., spirals, crosses, and parabolic shapes, can be modeled by use of symmetry derivatives of Gaussians with greater analytical precision as well as computational efficiency. Since Gaussians and their derivatives are utilized extensively in image processing, the revealed properties have practical consequences for local-orientation-based feature extraction. The usefulness of these results is demonstrated by two applications:</p><ol><li>tracking cross markers in long image sequences from vehicle crash tests and</li><li>alignment of noisy fingerprints.</li></ol> +
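<p>A worked instance of the stated Gaussian invariance, in common notation (σ for the Gaussian scale, D for the first symmetry derivative) that may differ from the paper's exact symbols:</p>

```latex
% Let D = \partial_x + i\,\partial_y act on an isotropic Gaussian:
\[
  g(x,y) = \frac{1}{2\pi\sigma^2}\,
           \exp\!\Bigl(-\frac{x^2+y^2}{2\sigma^2}\Bigr),
  \qquad
  D^{n} g = \Bigl(-\frac{1}{\sigma^2}\Bigr)^{\!n} (x+iy)^{n}\, g(x,y).
\]
% Sketch of the induction step: (x+iy)^n is holomorphic and D is
% proportional to \partial/\partial\bar{z}, so D[(x+iy)^n g] =
% (x+iy)^n D g = -(1/\sigma^2)(x+iy)^{n+1} g. Each application of D thus
% multiplies the Gaussian by the ordinary polynomial -(x+iy)/\sigma^2,
% which is the invariance the abstract refers to.
```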
<p>Research on OCR technology for Latin-based scripts has been successful, to the point of image scanners with built-in OCR facilities. However, a majority of modification-based scripts, such as the Brahmi-descended South Asian scripts or the Ethiopic script, are still progressing towards this status. This indicates the difficulty of adapting the recognition methods proposed so far for Latin-based scripts to modification-based scripts. In this paper we propose a novel method that can be adopted to recognise modification-based printed scripts consisting of a large character set, without the need for prior segmentation. The major strength of this method is that the direction features used as the main principle for recognition are further used in the separation of confusing characters, detection of skew angle, and segmentation of script and graphic objects, which substantially improves computational efficiency. Algorithms developed initially for the Brahmi-descended Sinhala script used in Sri Lanka have been extended successfully to the Ethiopic script, which evolved in a different geographical region, yielding consistently accurate results. Together, these two scripts are used by a population of ninety million.</p> +
<p>Sinhala characters, used in the Sinhala script by over 70% of the 18 million population of Sri Lanka, are descended from the ancient Brahmi script. The Sinhala alphabet consists of vowels and consonants, and the consonants are modified using modifier symbols to give the required vocal sounds. In the process of developing an OCR system for the Sinhala script, characters are initially recognised through a multi-level filtering process using the Linear Symmetry (LS) feature (1). The recognised character is then segmented to identify the associated modifier symbol(s). Since the use of LS recognises characters prior to segmentation, the most difficult task of separating touching characters is easily solved. A method to determine the skew angle of the script is also presented. Experiments conducted so far on widely used fonts of different sizes yield encouraging results.</p> +
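<p>A compact sketch of a Linear Symmetry style feature as referenced in the two OCR abstracts above: the windowed, angle-doubled complex gradient field, whose magnitude signals consistently oriented strokes and whose halved argument gives the dominant stroke direction. This follows the common structure-tensor formulation, not necessarily the papers' exact implementation; scales are placeholders.</p>

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def linear_symmetry(patch, sigma_d=1.0, sigma_w=3.0):
    # Gaussian-derivative gradients of a character patch.
    gx = gaussian_filter(patch, sigma_d, order=(0, 1))
    gy = gaussian_filter(patch, sigma_d, order=(1, 0))
    z = (gx + 1j * gy) ** 2  # angle-doubled gradient field
    # Window the field: high magnitude = locally consistent (linear
    # symmetry) stroke orientation.
    i20 = (gaussian_filter(z.real, sigma_w)
           + 1j * gaussian_filter(z.imag, sigma_w))
    return np.abs(i20), 0.5 * np.angle(i20)  # certainty, dominant direction
```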
Recognition of Similarities (ROS): A Methodological Approach to Analysing and Characterising Patterns of Daily Occupations +
<p>It has been proposed that it should be possible to identify patterns of daily occupations that promote health or cause illness. This study aimed to develop and evaluate a process for analysing and characterising subjectively perceived patterns of daily occupations, by describing patterns as consisting of main, hidden, and unexpected occupations. Yesterday diaries describing one day of 100 working married mothers were collected through interviews. The diaries were transformed into time-and-occupation graphs. An analysis based on visual interpretation of the patterns was performed. The graphs were grouped into the categories low, medium, or high complexity. In order to identify similarities, the graphs were then compared both pair-wise and group-wise. Finally, the complexity and similarity perspectives were integrated, identifying the most typical patterns of daily occupations representing low, medium, and high complexity. Visual differences in complexity were evident. In order to validate the Recognition of Similarities (ROS) process developed, a measure expressing the probability of change was computed. This probability was found to differ statistically significantly between the three groups, supporting the validity of the ROS process.</p> +