Research Terms
Engineering, Bioengineering, Neuroengineering, Electrical Engineering, Communications Engineering, Command-Control-Communications and Intelligence
Industries
Support Activities for Air Transportation, Telecommunications, Biotech
Dr. Principe has written 5 books and more than 600 refereed publications.
This modularized machine-learning model improves the accuracy of deep neural network (DNN) models. The standard methodology for training machine-learning models is the backpropagation (BP) algorithm. While backpropagation allows the parameters of multilayer neural networks to be tuned directly from data in supervised mode, it also introduces weaknesses that make it less than optimal. BP-based frameworks rely on digital signal processors and GPUs, which limit size and energy efficiency. With global mobile data traffic of about 47.6 million terabytes per month in 2020 and an estimated 220.8 million terabytes per month by 2026, there is a clear need for innovative machine-learning algorithms that can handle data of this scale at a sufficient rate.
Researchers at the University of Florida have developed a framework, based on the theory of maximal correlation, for adapting and identifying Multiple Input Multiple Output (MIMO) models of large, multimodal engineering processing plants. The framework enables modular training and achieves superior accuracy compared with backpropagation, while avoiding that technology's weaknesses (end-to-end training, lack of explainability). The Maximal Correlation Algorithm (MCA) estimates the statistical dependence of the samples.
Machine learning for system identification, online learning from data streams produced by sensors in large engineering plants
The maximal correlation algorithm is a new perspective for adaptive and learning systems that estimates the statistical dependence, or relatedness, between variables without forming an error signal. The maximal correlation algorithm unifies the model's mapping function and cost function and allows for modularized training of mappers with hidden layers, such as Multi-Layer Perceptrons (MLPs). The machine-learning model uses adaptive signal filter analysis, employing all statistical information about the model outputs and the desired signals. Thus, researchers have established a new framework to adapt and identify MIMO systems based on statistical dependence.
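The summary above does not specify how the statistical dependence is estimated. As a minimal, hedged illustration of measuring dependence between two variables without forming an error signal, the sketch below computes the classical Hirschfeld-Gebelein-Renyi (HGR) maximal correlation for discrete-valued samples; the MCA itself trains MLP mappers in a modular fashion, which is not shown here.

```python
import numpy as np

def hgr_maximal_correlation(x, y):
    """Estimate the Hirschfeld-Gebelein-Renyi (HGR) maximal correlation
    between two discrete-valued sample sequences. It equals the second
    largest singular value of Q[i, j] = P(xi, yj) / sqrt(P(xi) * P(yj))."""
    xs, x_idx = np.unique(x, return_inverse=True)
    ys, y_idx = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (x_idx, y_idx), 1.0)        # joint histogram
    joint /= joint.sum()                         # joint pmf P(x, y)
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    q = joint / np.sqrt(np.outer(px, py))        # normalized joint matrix
    s = np.linalg.svd(q, compute_uv=False)       # singular values, descending
    return s[1] if s.size > 1 else 0.0

rng = np.random.default_rng(0)
x = rng.integers(0, 8, size=5000)
y = (x ** 2 + rng.integers(0, 3, size=5000)) % 8     # strongly dependent on x
print(hgr_maximal_correlation(x, y))                 # close to 1
print(hgr_maximal_correlation(x, rng.integers(0, 8, size=5000)))  # near 0
```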
This ambulatory brain state advisory system determines and displays the current brain states of a patient for more effective treatment. The electroencephalogram (EEG) is a valuable tool for research and diagnosis and is used to diagnose and monitor epilepsy, sleep disorders, comas, and the depth of anesthesia in patients. In the United States, more than 40 million adults suffer from sleep disorders, while 3 million suffer from epilepsy. University of Florida researchers have created an ambulatory system that monitors brain-wave activity and classifies the patient's existing conditions, allowing for more effective treatment than is currently available. The EEG brain state advisor is patient-specific, adjusting the basis of the EEG decomposition to each patient's brain activity, which improves performance and allows for the estimation of a given condition. The advisory system output can be displayed on a portable, handheld device or a watch-form device, making it adaptable for ambulatory use.
Monitors and quantifies the current brain state of a patient
EEGs are used by physicians to test the electrical activity in a patient's brain for medical evaluation. Researchers at the University of Florida have created a system for monitoring brainwave activity that automatically calibrates to the patient, thus acting as a patient-specific brain state advisor. First, the system uses a time-series decomposition that corresponds to a model of the patient's brain activity, describing the collected brainwaves in terms of phasic events. These phasic events are then used within a framework that projects points so as to separate abnormal and normal brain states, or brain states that correspond to different stimuli. Finally, the projection is used to create a probabilistic model for each clinical diagnostic condition of interest. The model uses landmark points: the distance from the current brain state to the landmark point of a given condition predicts the likelihood of that condition. The past and current brain states are displayed alongside the landmark points, providing a patient-specific brain state advisory.
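The exact probabilistic model is not described above. As a hedged sketch of the landmark idea, the following converts distances from a projected brain-state vector to per-condition landmark points into probability-like scores; the softmax over negative squared distances is one plausible choice, not necessarily the one used, and the landmark values are hypothetical.

```python
import numpy as np

def condition_probabilities(state, landmarks, scale=1.0):
    """Given a projected brain-state vector and one landmark point per
    clinical condition, turn distances to the landmarks into a simple
    probabilistic score for each condition (closer landmark -> higher
    probability)."""
    names = list(landmarks)
    d2 = np.array([np.sum((state - landmarks[n]) ** 2) for n in names])
    logits = -d2 / (2.0 * scale ** 2)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return dict(zip(names, p))

# Hypothetical 2-D projection of phasic-event features
landmarks = {"normal sleep": np.array([0.0, 0.0]),
             "epileptiform":  np.array([3.0, 1.0]),
             "drowsy":        np.array([1.0, 2.5])}
current_state = np.array([2.6, 1.2])
print(condition_probabilities(current_state, landmarks))
```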
This implantable neural electrode device serves as a potential therapy for patients suffering from a host of neurological disorders of the central or peripheral nervous system. According to a report by the World Health Organization, neurological disorders, ranging from migraine to epilepsy and dementia, affect up to one billion people worldwide, and that number is expected to rise as populations age. Researchers at the University of Florida have developed a device that surpasses available therapies by enabling unprecedented miniaturization and minimal power consumption. This wireless, implantable system increases safety and convenience compared with existing external, bulky devices.
A technology in neuroprosthetics that serves as a potential therapy for patients suffering from a host of neurological disorders in the central or peripheral nervous system
This fully integrated, implantable neural electrode system has low power consumption. The system interfaces with neural tissue to record, as well as stimulate, neural activity in a research subject or patient. By using a flexible substrate as a hybrid platform that integrates the electrodes, the amplification and signal-processing electronics, and the wireless transmission and power-management electronics, this therapy surpasses currently available options. The electrodes are integrated as a single unit with the flexible substrate, while the electronics are optimized separately and then hybrid packaged. Constructing the components separately allows for more efficient and cost-effective fabrication.
This system uses correntropy-based signal processing to separate noise components from information-carrying components, enabling detection of low-level periodic signals within noisy signals. Detecting weak signals in noisy environments is a key challenge for many communications, surveillance, and monitoring systems. Signal processors use a variety of filtering techniques to separate noise components from information-carrying components, but these vary in effectiveness. Principal component analysis (PCA) is a common filtering procedure that decomposes a signal into multiple principal components, separating out the information-carrying components. However, standard PCA is data-dependent, requiring external data. Therefore, it may not effectively separate the information-carrying signal from the noise, depending on the components and the available signal data.
Researchers at the University of Florida have developed a data-independent signal processing system that uses a correntropy function to generate a nonlinear autocorrelation matrix. The system may then apply temporal PCA to separate the signal components and analyze them based on energy levels without the need for external data.
Signal processing system that detects information-carrying signals in high noise environments
This signal processing system generates a nonlinear mapping of a highly noisy signal through the use of correntropy, a nonlinear measure of similarity. A data-independent correntropy kernel generates a nonlinear signal mapping, which in turn generates a nonlinear autocorrelation matrix for subsequent use in temporal principal component analysis. After selecting the principal component with the highest energy, the system performs power spectral density (PSD) analysis on it. The maximum peak corresponds to the noise-obscured, information-carrying signal, which the system may then extract for interpretation. Alternatively, the system can perform a PSD analysis on any individual component in order to extract a signal of interest.
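The following is a minimal numerical sketch of this pipeline under stated assumptions: a Gaussian kernel builds an autocorrentropy matrix over signal lags, temporal PCA is applied to that matrix, and the PSD of the selected component reveals a tone buried in noise. The centering step and all parameter values are choices made for the sketch, not details taken from the invention.

```python
import numpy as np

def correntropy_matrix(x, max_lag, sigma):
    """Autocorrentropy matrix V[i, j] = mean_t k(x[t-i], x[t-j]) with a
    Gaussian kernel k; it replaces the linear autocorrelation matrix
    used in ordinary temporal PCA."""
    n = len(x)
    V = np.zeros((max_lag, max_lag))
    for i in range(max_lag):
        for j in range(max_lag):
            d = x[max_lag - i:n - i] - x[max_lag - j:n - j]
            V[i, j] = np.mean(np.exp(-d ** 2 / (2 * sigma ** 2)))
    return V

# Weak 50 Hz tone (amplitude 0.5) buried in unit-variance noise
fs, f0 = 1000.0, 50.0
t = np.arange(0, 4.0, 1 / fs)
rng = np.random.default_rng(1)
x = 0.5 * np.sin(2 * np.pi * f0 * t) + rng.standard_normal(t.size)

V = correntropy_matrix(x, max_lag=128, sigma=1.0)
V = V - V.mean(0) - V.mean(1)[:, None] + V.mean()   # remove constant baseline
w, U = np.linalg.eigh(V)                            # temporal PCA
pc = U[:, -1]                                       # highest-energy component
psd = np.abs(np.fft.rfft(pc)) ** 2                  # power spectral density
freqs = np.fft.rfftfreq(len(pc), d=1 / fs)
print("peak frequency:", freqs[np.argmax(psd)])     # expected near 50 Hz
```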
This pulse-based arithmetic unit uses an adaptive network to process auditory signals in the digital speech processors that are widely employed in portable electronic devices. Digital speech processing involves the conversion of sound waves to digital signals that can be analyzed by a computing device. It is a widespread technology used in fields such as military devices, health equipment, and mobile computing. The global voice and speech recognition market is currently estimated at over $250 million, growing at a 22 percent CAGR through 2019, and is projected to approach $5 billion by 2024. Existing speech recognition models, including Hidden Markov Models, are highly complex and thus lack adaptability. Other existing speech recognition systems, such as Apple's Siri, require a cellular or wireless connection to process auditory signals. Researchers at the University of Florida have engineered a speech recognition technology that processes auditory signals at an improved rate while significantly decreasing power usage and size. Additionally, this technology requires no internet connection and can be configured for each individual. This pulse-based digital signal processing technology may be extended to applications including health monitoring and military services.
Pulse-based computation for optimized power consumption and adaptability in digital speech recognition
This device uses pairs of adjacent pulses coupled with the Kernel Adaptive Autoregressive Moving Average (KAARMA) model to process an auditory signal in a clear and efficient manner. The initial auditory signal is converted into a pulse train that is broken up into a series of fragments. Each conversion follows a predefined correspondence between sound frequency and the resulting pulse trains. Each fragment of the auditory signal is then processed by applying its pulse train to a KAARMA network. The digital signal processing system then identifies the spoken words according to the pulse-train/KAARMA network relationship and responds accordingly.
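As a hedged sketch of the front end only, the code below converts a toy audio waveform into a pulse train with an integrate-and-fire converter and splits it into fixed-length fragments of the kind that would be passed to the KAARMA network. The KAARMA classification stage itself is not reproduced, and the threshold and frame length are illustrative.

```python
import numpy as np

def integrate_and_fire(signal, threshold, dt):
    """Convert an analog signal into a pulse train: integrate the signal
    and emit a pulse (recording its time) each time the accumulated area
    crosses the threshold, then reset the integrator."""
    acc, pulse_times = 0.0, []
    for i, v in enumerate(signal):
        acc += v * dt
        if acc >= threshold:
            pulse_times.append(i * dt)
            acc -= threshold
    return np.array(pulse_times)

def fragment(pulse_times, frame_len, total_len):
    """Split the pulse train into fixed-length fragments (frames), the
    units that would be fed to the KAARMA network."""
    return [pulse_times[(pulse_times >= t0) & (pulse_times < t0 + frame_len)] - t0
            for t0 in np.arange(0.0, total_len, frame_len)]

# Toy "audio": rectified chirp-like waveform (the converter integrates a
# non-negative signal; real front ends rectify band-passed speech first)
fs = 16000.0
t = np.arange(0, 0.2, 1 / fs)
audio = np.abs(np.sin(2 * np.pi * (200 + 2000 * t) * t))
pulses = integrate_and_fire(audio, threshold=0.002, dt=1 / fs)
frames = fragment(pulses, frame_len=0.02, total_len=0.2)
print([len(f) for f in frames])   # pulse count per 20 ms frame
```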
This filter uses hidden state estimation with non-Gaussian uncertainties to separate moving objects from a non-stationary background in video sequences. In video surveillance applications, for example, it is very important to detect new moving objects entering the camera's field of view and to separate foreground objects from the background, making it possible to detect sudden changes in the scene or to track moving objects. Typically, the Kalman filter provides the most accurate background estimation; however, this estimation takes only Gaussian random variables into account. Thus, the Kalman filter does not perform well in non-Gaussian settings, exposing a need for a new filter that can extract higher-order information from signals. University of Florida researchers have designed an adaptive background estimator for video sequences based on correntropy instead of the conventional mean squared error, allowing the filter to use higher-order statistics. This filter can also be employed for background modeling in real-time sports footage to extract foreground objects and for monitoring traffic on highways and roads.
Filters moving objects from non-stationary background to detect sudden changes and track moving objects
This filter performs hidden state estimation with a computable function based on statistical theory dealing with the limits and efficiency of information processing. The function uses the similarity measure correntropy as a performance index, which is directly related to the probability that two random variables are similar within a neighborhood of the joint space. Researchers at the University of Florida assign a filter to each pixel of the video sequence, where each pixel is defined by three continuous hidden states, or color values. An identity matrix within the function uses these hidden states to classify the background and separate objects from it. The identity matrix does not expect the states to change, because the background color does not change rapidly. The background can be initialized from the first frame, after which the filter works on each incoming frame unsupervised. As a result, the filter extracts the background, eliminates noise, and adapts to changes in the background scene. The following videos demonstrate the use of correntropy to separate moving foreground objects from a non-stationary background in video sequences. In both videos, the upper left window is the original video feed, the lower right window is the estimated background, and the upper right window is the difference between these two, which corresponds to the foreground objects. The first video shows how correntropy could be used for security purposes; the second shows how it could be used at sporting events.
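The full per-pixel state-space filter is not reproduced here; the sketch below illustrates only the core idea of a correntropy-weighted (rather than mean-squared-error) background update, with illustrative parameter values, so that large foreground deviations barely perturb the background estimate.

```python
import numpy as np

def update_background(background, frame, sigma=10.0, eta=0.05, fg_thresh=25.0):
    """One recursive background-update step per pixel using a correntropy
    (Gaussian-kernel) weighted correction instead of an MSE update. Pixels
    that differ greatly from the background (foreground objects, impulsive
    noise) receive an exponentially small weight, so they barely corrupt
    the background estimate."""
    diff = frame - background
    weight = np.exp(-diff ** 2 / (2 * sigma ** 2))      # correntropy kernel
    background = background + eta * weight * diff        # robust update
    foreground_mask = np.abs(diff) > fg_thresh           # detected objects
    return background, foreground_mask

# Toy sequence: static gradient background with a bright moving square
rng = np.random.default_rng(2)
h, w = 64, 64
true_bg = np.tile(np.linspace(50, 200, w), (h, 1))
bg_est = true_bg + rng.normal(0, 5, (h, w))              # rough initial estimate
for k in range(30):
    frame = true_bg + rng.normal(0, 3, (h, w))
    frame[20:30, 2 * k:2 * k + 10] = 255.0               # moving object
    bg_est, mask = update_background(bg_est, frame)
print("foreground pixels in last frame:", int(mask.sum()))
```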
This pulse-based arithmetic unit uses a more efficient implementation of the integrate-and-fire analog-to-pulse converter to decrease the area and power consumption of traditional digital signal processors, which are widely employed in mobile computing devices. Mobile computing refers to human-computer interaction through portable devices such as smartphones, tablets, and notebook computers. The mobile computing and portable devices market reached $830 million in 2014 and is expected to reach nearly $5.2 billion in 2020. Available devices still use conventional digital signal processors and digital arithmetic units, which limit size and energy efficiency. Researchers at the University of Florida have engineered a pulse-based arithmetic unit that uses the timing of pulse trains to decrease both the area and the power consumption of conventional digital signal processors, thus greatly optimizing the future of mobile computing.
Arithmetic pulse trains for optimized power consumption in mobile computing and portable devices
This device uses pairs of adjacent pulses, which correspond to areas under the curves of the respective analog signals, in pulse trains produced by two independent integrate-and-fire converters (IFCs). These pulse trains are then used to estimate the corresponding addition or multiplication of the instantaneous amplitudes of the pair of analog signals passed to the IFCs. The device comprises a time-to-counts converter, which converts the pulse timing of the integrate-and-fire sampler's pulse train into corresponding digital counts, which are then used by digital signal processors.
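A minimal numerical sketch of the time-to-counts idea follows: inter-pulse intervals are expressed as clock counts, the mean amplitude between adjacent pulses is recovered as threshold divided by interval, and addition and multiplication are then carried out on those estimates. The real unit operates directly on pulse timings in hardware; the constant test signals and clock rate here are assumptions for illustration.

```python
import numpy as np

def pulse_times_to_counts(pulse_times, clock_hz):
    """Time-to-counts conversion: express each inter-pulse interval as an
    integer number of clock ticks, the digital representation the
    arithmetic unit operates on."""
    intervals = np.diff(pulse_times)
    return np.round(intervals * clock_hz).astype(int)

def amplitude_from_counts(counts, threshold, clock_hz):
    """For an integrate-and-fire converter, the area under the signal
    between adjacent pulses equals the threshold, so the mean amplitude
    over that interval is roughly threshold / interval."""
    return threshold / (counts / clock_hz)

# Two constant toy signals encoded by independent IFCs (threshold = 1.0).
# A constant signal of amplitude a fires one pulse every (threshold / a) s.
clock = 1e6
a1, a2, theta = 3.0, 5.0, 1.0
pulses1 = np.arange(0, 2, theta / a1)          # pulse times of signal 1
pulses2 = np.arange(0, 2, theta / a2)          # pulse times of signal 2
c1 = pulse_times_to_counts(pulses1, clock)
c2 = pulse_times_to_counts(pulses2, clock)
est1 = amplitude_from_counts(c1, theta, clock).mean()
est2 = amplitude_from_counts(c2, theta, clock).mean()
print("sum ~", est1 + est2, " product ~", est1 * est2)   # ~8.0 and ~15.0
```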
This signal processing technique uses a correntropy statistical model to filter noisy, weak, or distorted digital signals. Signal processing filters extract meaningful information from signals corrupted with noise, benefitting a variety of fields including digital communications, seismology, and biomedical engineering. In the digital communications field, for example, all cell towers use filters to improve their signal. In the US alone, cell phone companies have invested over $177 billion in cell towers since 2010, with over 300,000 cell towers in 2016. For most forms of noise, nonlinear filters provide optimal signal recovery. The Volterra series approximation is one attempt at creating nonlinear filter solutions, but the solutions it generates are complex and involve numerous coefficients. Other solutions include nonlinearly transforming an input and then computing a regression of the output. These solutions require a considerable amount of computation, making them impractical for real-world applications. Researchers at the University of Florida have developed a signal processing technique that uses a statistical model based on nonlinear correntropy to filter digital signals. This processor filters weak or noisy input signals and outputs clear, amplified signals.
Nonlinear correntropy filter that improves processing of noisy digital signals
This filter processes digital signals by using a correntropy statistical model. Creating the correntropy filter involves generating a correntropy statistic based on a kernel function and determining filter weights from this statistic. The filtering process starts when the processor receives an input that may include multiple, scattered, noisy, distorted, or low-intensity signals. The processor then cleans up the input using the nonlinear correntropy filter and generates an output comprising a best-fit prediction of the actual signal without noise or distortions. The output signal is clearer and amplified.
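The exact kernel statistic and weight-determination procedure are not given above. As a hedged stand-in, the sketch below implements a maximum-correntropy-criterion (MCC) adaptive FIR filter, a well-known correntropy-based filter in which the weight update is scaled by a Gaussian kernel of the error; the signal, kernel size, and step size are illustrative.

```python
import numpy as np

def mcc_filter(x, d, num_taps=8, mu=0.1, sigma=1.0):
    """Adaptive FIR filter trained with the maximum correntropy criterion:
    the LMS-style weight update is scaled by a Gaussian kernel of the
    error, so large (impulsive) errors contribute almost nothing and the
    filter stays robust to outliers."""
    w = np.zeros(num_taps)
    y = np.zeros(len(x))
    for n in range(num_taps, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # current input regressor
        y[n] = w @ u                          # filter output
        e = d[n] - y[n]                       # error vs. desired signal
        w += mu * np.exp(-e ** 2 / (2 * sigma ** 2)) * e * u
    return y, w

# Desired signal = clean sinusoid; input = sinusoid + impulsive noise
rng = np.random.default_rng(3)
t = np.arange(0, 4, 1 / 500)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.2 * rng.standard_normal(t.size)
noisy[rng.integers(0, t.size, 40)] += 10.0    # heavy-tailed outliers
y, w = mcc_filter(noisy, clean)
print("residual RMS:", np.sqrt(np.mean((y[200:] - clean[200:]) ** 2)))
```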
This energy-efficient, pulse-based automatic gain control can be used with both pulse-based and analog signal processing systems. Automatic gain control circuits are used in electronic systems such as communication receivers, radar, and audio and video systems, where the level of the input signal fluctuates over a wide dynamic range. This market is expected to reach $50 billion in 2017. Available automatic gain control systems detect parameters using amplitude or modulation, providing a controlled signal output even when the input signal varies from strong to weak. Researchers at the University of Florida have created an automatic gain control that uses an integrate-and-fire sampler to detect the timing between integrate-and-fire pulses and then vary the gain. This enables simultaneous control of both the analog amplitude and the timing of the integrate-and-fire pulse train.
Automatic gain control that can be used in both analog and pulse-based signal processing systems
This automatic gain control uses an integrate-and-fire sampler to detect the timing between integrate-and-fire pulses instead of detecting parameters based on amplitude or modulation. The integrate-and-fire timing is then used to vary the gain. Including the integrate-and-fire sampler in the automatic gain control loop enables control of both analog and pulse-based signal processing systems. The integrate-and-fire sampler, one of the main components of the automatic gain control, consumes extremely little power, so it can be used in the front end of low-power analog signal processing systems.
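The sketch below is a simplified, block-wise illustration of the loop under assumed parameters: an integrate-and-fire sampler measures the mean inter-pulse interval of the gained signal, and the gain is nudged so that the measured interval approaches a target interval (shorter intervals mean the signal is too strong, so the gain is reduced).

```python
import numpy as np

def if_interval(signal, threshold, dt):
    """Mean inter-pulse interval produced by an integrate-and-fire sampler
    for the given (non-negative) signal block."""
    acc, times = 0.0, []
    for i, v in enumerate(signal):
        acc += v * dt
        if acc >= threshold:
            times.append(i * dt)
            acc -= threshold
    return np.mean(np.diff(times)) if len(times) > 1 else np.inf

def agc_step(gain, block, target_interval, threshold, dt, alpha=0.2):
    """One AGC update: apply the current gain, measure pulse timing, and
    nudge the gain so the measured inter-pulse interval approaches the
    target."""
    interval = if_interval(gain * block, threshold, dt)
    gain *= (interval / target_interval) ** alpha
    return gain, interval

# Toy input whose envelope jumps from weak to strong halfway through
fs, dt = 8000.0, 1 / 8000.0
t = np.arange(0, 1.0, dt)
env = np.where(t < 0.5, 0.1, 1.0)
x = env * np.abs(np.sin(2 * np.pi * 200 * t))
gain = 1.0
for start in range(0, len(x), 400):            # process 50 ms blocks
    gain, interval = agc_step(gain, x[start:start + 400],
                              target_interval=2e-3, threshold=1e-3, dt=dt)
print("final gain:", round(gain, 3))           # settles lower for the loud segment
```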
The Correntropy Loss, or C-loss, function stabilizes training in artificial neural networks to classify data points optimally and resist the presence of outliers in a given sample. Artificial neural networks are statistical models that process information in a way comparable to biological neural networks. Correntropy (correlative entropy) is a nonlinear measure of similarity between two random variables in a data set. Existing loss functions for data classification, including the Mean Squared Error (MSE) loss, are sensitive to noise from outliers and therefore commonly produce poor models for predicting future trends, an occurrence known as overfitting. These problematic data approximations decrease the success of artificial neural network classification used in medical diagnoses, data mining applications, financial data domains, and the information technology (IT) industry. Such losses may result in detrimental miscalculations, including medical misdiagnosis, considerable financial losses for companies and private investors, and stunted technological growth. Researchers at the University of Florida have found that the Correntropy Loss function is insensitive to immaterial noise and eliminates the trend bias created by outliers in a data set, thus creating a smooth and accurate function approximation for robust training of artificial neural networks.
C-Loss function in data mining improves data classification in artificial neural networks
The Correntropy Loss function is an advanced statistical algorithm for precise data mining and classification in artificial neural networks. The MSE loss function is currently one of the most common formulas used for nonlinear, non-Gaussian data in computational network classification; however, it is not robust to outliers and is limited by complexity. The Correntropy Loss function, when combined with the MSE loss function, yields an algorithm for precise data classification and mining in artificial neural networks. The C-loss function is insensitive to outliers and resilient to overfitting; it applies correntropy to a known set of values and training samples to obtain a discriminant function, which is then employed to accurately predict performance on test values. Since many current database systems employ the MSE loss function for training, a simple switch to the C-loss function may prove beneficial for measurement accuracy, robust training, data mining, and data classification in neural networks.
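A minimal sketch of the C-loss and its gradient is given below; the normalization constant (chosen so the loss equals 1 at an error of 1) and the kernel size are common choices in the correntropy literature, assumed here rather than taken from the summary above.

```python
import numpy as np

def c_loss(error, sigma=1.0):
    """Correntropy-induced loss (C-loss). Unlike the squared error, it
    saturates for large errors, so outliers cannot dominate training."""
    beta = 1.0 / (1.0 - np.exp(-1.0 / (2.0 * sigma ** 2)))
    return beta * (1.0 - np.exp(-error ** 2 / (2.0 * sigma ** 2)))

def c_loss_grad(error, sigma=1.0):
    """Derivative of the C-loss with respect to the error; the exponential
    factor shrinks the gradient for outliers, which is what makes training
    with this loss robust."""
    beta = 1.0 / (1.0 - np.exp(-1.0 / (2.0 * sigma ** 2)))
    return beta * (error / sigma ** 2) * np.exp(-error ** 2 / (2.0 * sigma ** 2))

errors = np.array([0.1, 1.0, 5.0, 50.0])
print("MSE loss:   ", errors ** 2)          # grows without bound
print("C-loss:     ", c_loss(errors))       # saturates for large errors
print("C-loss grad:", c_loss_grad(errors))  # outlier gradient ~ 0
```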
This diagnostic, pulse-based algorithm isolates the QRS complex and filters noise from electrocardiogram signals, while reducing size and power consumption. The QRS complex is the set of graphical deflections seen on an electrocardiogram, the standard tool used to monitor heart function. Regular monitoring can lead to early detection of potentially fatal cardiac events. Cardiovascular disease, the leading cause of death in the world, caused an estimated 30 percent of all deaths in 2008. By 2030, more than 23 million people worldwide will die annually from cardiovascular disease. Many of these deaths are preventable with treatment and proper detection using signals such as the electrocardiogram. Available electrocardiogram technologies that isolate the QRS signal from noise are based on digital signal processors with bulky circuitry and require large power consumption to recognize the QRS signals. This diagnostic tool developed by UF researchers uses integrate-and-fire pulse train finite state machines to automatically interpret electrocardiogram signals and identify potential health threats, while drastically reducing the size and power consumption needed for continuous ambulatory heart monitoring.
Continuous ambulatory monitoring diagnostic tool for QRS complex detection
Integrate-and-fire sampling encodes a signal as a series of time events rather than as uniformly spaced amplitude values, reducing power consumption by two orders of magnitude compared with existing digital signal processing. It also leads to revolutionary new ways to build ultra-low-power devices based on finite state automata that recognize events of interest in physiologic monitoring. The technology is fully compatible with existing electrocardiogram designs, replacing spacious and power-demanding DSP circuitry, and suffers no loss of performance while providing ultra-low power consumption.
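The code below is a toy, hedged illustration of the concept on a synthetic trace: an integrate-and-fire sampler turns the waveform into pulse times, and a tiny finite state machine flags a QRS complex whenever it sees a run of closely spaced pulses, with a refractory period between detections. All thresholds and timings are assumptions chosen for the synthetic signal, not clinical values.

```python
import numpy as np

def integrate_and_fire(signal, threshold, dt):
    """Encode a non-negative signal as pulse times (integrate-and-fire)."""
    acc, times = 0.0, []
    for i, v in enumerate(signal):
        acc += v * dt
        if acc >= threshold:
            times.append(i * dt)
            acc -= threshold
    return np.array(times)

def qrs_fsm(pulse_times, burst_gap=0.02, min_burst=4, refractory=0.25):
    """Tiny finite state machine over the pulse train: a run of closely
    spaced pulses (a burst) marks the steep, high-amplitude QRS complex;
    after a detection the machine ignores pulses for a refractory period."""
    detections, run, last_t, last_qrs = [], 0, -np.inf, -np.inf
    for t in pulse_times:
        run = run + 1 if t - last_t < burst_gap else 1   # idle <-> counting
        last_t = t
        if run >= min_burst and t - last_qrs > refractory:
            detections.append(t)                          # detect state
            last_qrs, run = t, 0
    return detections

# Synthetic ECG-like trace: narrow tall spikes (QRS) on a small baseline
fs, dt = 500.0, 1 / 500.0
t = np.arange(0, 5, dt)
ecg = 0.05 + np.zeros_like(t)
for beat in np.arange(0.4, 5, 0.8):                      # ~75 bpm
    ecg += 2.0 * np.exp(-((t - beat) / 0.02) ** 2)
pulses = integrate_and_fire(ecg, threshold=0.01, dt=dt)
print("detected beats:", [round(x, 2) for x in qrs_fsm(pulses)])
```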
This algorithm trains adaptive systems to efficiently identify and filter undesired audio (noise) during audio capture. More than 300 million cell phones are in use in the United States, accounting for only about 20 percent of the world's cell phone usage. Noise impeding cell phone use can come from a variety of sources, including machine engines, vacuum cleaners, or other people. Current noise-cancelling microphones use a differential microphone topology with two microphones: one closer to the audio source to capture the primary audio signal and the other to capture ambient noise. This technology often has a narrow range of filtration and does not adapt easily to sudden changes in the noise signal. Researchers at the University of Florida have developed an algorithm using a correntropy cost function to improve the accuracy and efficiency of adaptive systems in noise-cancelling microphones. This technology trains adaptive systems to recognize and adjust for real-time changes, reducing the detrimental effects of outliers and impulsive noise. The adaptive system is useful in a variety of signal processing applications, including channel equalization, noise cancellation, and system modeling.
Algorithm using correntropy provides robust training of adaptive systems to improve signal processing applications
This algorithm provides robust training of adaptive systems in noise-cancelling microphones. The adaptive system learns the parameters of the filter by using a correntropy measure between a primary input and the output of the filter. Correntropy is a measure of the similarity of two random variables within a small neighborhood. Combining a cost function (i.e., criterion function), a learning algorithm, and an adaptive filter, the algorithm processes a reference signal through two separate filters. One reference filter is combined with the primary signal to identify the desired signal, which is compared with the output of the second reference filter to yield a cost-function signal. This signal is used to adjust the adaptive filter to optimize the desired signal, eliminating unwanted noise.
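As a hedged illustration of the underlying principle, the sketch below implements a simplified, single-filter adaptive noise canceller whose weight update uses a correntropy (Gaussian-kernel-weighted) cost, so impulsive outliers barely move the weights. The two-reference-filter arrangement described above is not reproduced, and the signals and parameters are illustrative.

```python
import numpy as np

def mcc_noise_canceller(primary, reference, num_taps=16, mu=0.05, sigma=1.0):
    """Adaptive noise canceller with a correntropy cost: the adaptive filter
    shapes the noise reference to match the noise in the primary input, and
    subtracting it leaves the desired signal. The Gaussian kernel of the
    error keeps impulsive outliers from disturbing the weights."""
    w = np.zeros(num_taps)
    out = np.zeros(len(primary))
    for n in range(num_taps, len(primary)):
        u = reference[n - num_taps + 1:n + 1][::-1]   # recent reference samples
        noise_hat = w @ u                             # estimated noise in primary
        e = primary[n] - noise_hat                    # error = recovered signal
        w += mu * np.exp(-e ** 2 / (2 * sigma ** 2)) * e * u
        out[n] = e
    return out

# Toy scenario: "speech" sinusoid corrupted by filtered reference noise
rng = np.random.default_rng(4)
n = 8000
speech = np.sin(2 * np.pi * np.arange(n) * 0.01)
noise_ref = rng.standard_normal(n)
room = np.array([0.6, -0.3, 0.2])                     # unknown noise path
primary = speech + np.convolve(noise_ref, room)[:n]
recovered = mcc_noise_canceller(primary, noise_ref)
print("residual RMS:", np.sqrt(np.mean((recovered[500:] - speech[500:]) ** 2)))
```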
This machine-learning framework develops and trains machine-learning models without the need for a fully-labeled training data set. In classification, designing learning algorithms that can produce high-performing models with less human supervision is a long-sought goal. A pervasive practice in supervised learning is to train a classifier using a data-label pair for every sample in the training set. Obtaining large sets of fully-labeled training data is expensive, time-consuming, and inefficient, and the industry has yet to provide processes and systems for efficiently and directly obtaining data that captures sufficient label information for developing and training machine-learning models.
Researchers at the University of Florida have developed a machine-learning classification framework utilizing sufficiently labeled data. Inspired by the principle of sufficiency in statistics, sufficiently-labeled data presents a summary of the fully-labeled training set. It captures the relevant information for classification, while being easier to obtain directly from annotators and preserving user privacy.
A development framework that enables training of a machine-learning model without fully-labeled training data sets
Researchers at the University of Florida developed a framework for training machine-learning models. It comprises a hidden module and an output module configured to predict the original labels. The machine-learning model is then trained on the sufficiently labeled data, via one or more processors, automatically providing a trained model for use in prediction tasks. The framework offers an alternative view of neural networks that treats the layers as linear models in feature spaces, with demonstrated benefits in transfer-learning settings.
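The summary does not define what sufficiently labeled data consists of. Purely as an assumed illustration, the sketch below treats it as pairwise same-class/different-class annotations plus one fully labeled example per class: the hidden module is trained from the pairwise annotations and the small output module from the per-class examples. Every modeling choice here (the contrastive objective, network sizes, synthetic data) is hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Synthetic two-class data in 2-D (stand-in for real features)
n = 400
labels = (torch.arange(n) % 2).long()
centers = torch.where(labels.unsqueeze(1) == 0,
                      torch.tensor([1.5, 1.5]), torch.tensor([-1.5, -1.5]))
x = torch.randn(n, 2) + centers

# 1) Hidden module trained only from pairwise "same label?" annotations
hidden = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 8))
opt = torch.optim.Adam(hidden.parameters(), lr=1e-2)
for step in range(300):
    i = torch.randint(0, n, (64,))
    j = torch.randint(0, n, (64,))
    same = (labels[i] == labels[j]).float()
    d2 = ((hidden(x[i]) - hidden(x[j])) ** 2).sum(dim=1)
    loss = (same * d2 + (1 - same) * F.relu(4.0 - d2)).mean()   # contrastive
    opt.zero_grad(); loss.backward(); opt.step()

# 2) Output module trained on one fully labeled example per class
with torch.no_grad():
    z = hidden(x)
anchors = torch.stack([(labels == c).nonzero()[0, 0] for c in (0, 1)])
out = nn.Linear(8, 2)
opt2 = torch.optim.Adam(out.parameters(), lr=1e-1)
for step in range(200):
    loss = F.cross_entropy(out(z[anchors]), labels[anchors])
    opt2.zero_grad(); loss.backward(); opt2.step()

acc = (out(z).argmax(dim=1) == labels).float().mean()
print("accuracy:", float(acc))
```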
This object recognition algorithm can discriminate objects in videos without requiring the extensive training that most available methods need. Based on a deep learning architecture originally developed for images, it provides video processing and object tracking to aid computer vision applications such as self-driving cars, automated military drones, and surveillance. In 2012, the computer vision market was valued at $4.37 billion. Automated object recognition is classically a complex field requiring specification of the large number of variations an object can have in an environment, including position, rotation, and scale. Many object recognition algorithms capture and process the entire image at once, losing finer detail and requiring high processing power. Researchers at the University of Florida have developed an unsupervised object recognition algorithm for video that does not require extensive training. The algorithm narrows the targeted data, reducing the amount of information and power necessary for processing.
Algorithm for natural object recognition in video
This object recognition software is based on deep learning but uses a dynamic model to handle video processing, so it can process the large number of variations an object can have in an environment, including scale, rotation, and position. The model sparsely represents the observations, analyzes parts of the input data independently, and combines them in a hierarchical fashion with top-down information. The inputs from the images are processed before being combined to form a globally invariant representation. These invariant representations can then be fed to a classifier for robust object recognition.
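A deliberately simplified sketch of the "sparse, part-wise, then pooled" idea follows: image patches are sparse-coded against a dictionary (here a random one, with hard thresholding standing in for a learned sparse inference step) and the patch codes are max-pooled into a frame descriptor that a classifier could consume. None of these specific choices come from the invention.

```python
import numpy as np

def sparse_code(patch, dictionary, k=3):
    """Hard-thresholding sparse approximation: represent a patch using only
    its k strongest dictionary responses (a simple stand-in for the sparse
    representation described above)."""
    coeffs = dictionary @ patch
    keep = np.argsort(np.abs(coeffs))[-k:]
    sparse = np.zeros_like(coeffs)
    sparse[keep] = coeffs[keep]
    return sparse

def encode_frame(frame, dictionary, patch=8, k=3):
    """Analyze local parts independently (patch-wise sparse codes) and
    combine them into a single frame descriptor by max pooling, a crude
    form of the hierarchical, invariant representation."""
    h, w = frame.shape
    codes = [np.abs(sparse_code(frame[i:i + patch, j:j + patch].ravel(),
                                dictionary, k))
             for i in range(0, h - patch + 1, patch)
             for j in range(0, w - patch + 1, patch)]
    return np.max(codes, axis=0)             # pooled, roughly shift-invariant

rng = np.random.default_rng(5)
D = rng.standard_normal((32, 64))            # random 32-atom dictionary
D /= np.linalg.norm(D, axis=1, keepdims=True)
frame = rng.standard_normal((64, 64))        # stand-in video frame
descriptor = encode_frame(frame, D)
print(descriptor.shape)                      # (32,) feature fed to a classifier
```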
This brain-machine interface (BMI) software and equipment finds the functional connection between brain activity and intended physical action. Available technology requires the patient to have the ability to move in order to create such a connection; however, patients may have limited or no movement capabilities for a variety of reasons, including stroke, paralysis, or amputation. Each year, 800,000 Americans suffer strokes, and as many as 20,000 experience traumatic spinal cord injuries. The military efforts in Iraq and Afghanistan have already resulted in more than 1,700 major amputations among American soldiers. Researchers at the University of Florida have created an architecture that provides the learning necessary to use BMI software and equipment for patients who are physically unable to move. This invention addresses an unmet need for paralyzed patients and patients with other motor neuropathies, such as amputees and stroke patients, who are unable to generate the movement trajectories necessary for BMI training. The mechanism underlying this architecture is learning control policies using feedback from the external environment. In addition to learning control without movements, this creates a bridge to adapting control in a dynamic environment, a key challenge in brain-machine interfaces and adaptive algorithms in general.
Software and equipment that coadapts to translate neural activity into physical movement for patients with paralysis or prosthetics
This BMI architecture translates neural activity into goal-directed behaviors without first having to map the patient's movement to control computers or prosthetic devices. Available technologies require a patient to physically make movements to train the BMI control systems. UF researchers have developed a semi-supervised BMI control architecture that uses reinforcement learning to co-adaptively find the mapping from neural states to motor actions in goal-directed tasks. The algorithm is able to learn from a noisy control signal (the patient's brain) and a changing environment. Instead of imposing rigid, often movement-based mappings, the system coadapts to consistently make the most beneficial decisions. This breakthrough could improve quality of life for the large population of patients who could benefit from prosthetic technologies.
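As a closing, heavily simplified sketch of the reinforcement-learning idea, the code below learns a mapping from discretized "neural states" to prosthetic actions using only task-outcome rewards, with no kinematic training data. The tabular formulation, state and action counts, and reward scheme are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(6)
n_states, n_actions = 8, 4        # discretized neural states / prosthetic actions
Q = np.zeros((n_states, n_actions))
alpha, epsilon = 0.2, 0.1         # learning rate, exploration rate

# Hypothetical ground truth: the action the patient intends in each neural state
intended = rng.integers(0, n_actions, n_states)

for trial in range(5000):
    s = rng.integers(0, n_states)                        # decoded neural state
    explore = rng.random() < epsilon
    a = rng.integers(0, n_actions) if explore else int(Q[s].argmax())
    reward = 1.0 if a == intended[s] else -1.0           # task-outcome feedback only
    Q[s, a] += alpha * (reward - Q[s, a])                # value update, no kinematics

print("states mapped to the intended action:",
      int((Q.argmax(axis=1) == intended).sum()), "of", n_states)
```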