Current Research Topics
Graph signal processing
Many current practical problems can be modeled via data signals defined on the nodes of a weighted graph. Social media, traffic, wireless networks, genetic networks, and functional relationships across brain regions are instances of this kind of data. Graphs can also describe similarities among high-dimensional data points in statistical learning, making them extremely useful in machine learning solutions. Graph signal processing (GSP) extends classical discrete signal processing (DSP) concepts and techniques in order to reveal relevant information about these unstructured data by exploiting the underlying topology.
Main Research Focus
Our main goals in this research topic are: (i) to conceive new DSP-inspired analysis tools for GSP, such as new transforms for spectral analysis; (ii) to propose novel adaptive filters for graph signals; (iii) to enhance current vertex-frequency analysis tools; and (iv) to investigate new applications that can greatly benefit from the GSP approach.
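As a concrete instance of the DSP analogy behind goal (i), the following minimal sketch computes a graph Fourier transform (GFT) from the eigendecomposition of the graph Laplacian, the standard spectral-analysis construction in GSP; the small graph and the signal values are illustrative placeholders, and only NumPy is assumed.

    import numpy as np

    # Adjacency matrix of a small illustrative weighted graph (4 nodes).
    W = np.array([[0.0, 1.0, 0.5, 0.0],
                  [1.0, 0.0, 1.0, 0.0],
                  [0.5, 1.0, 0.0, 2.0],
                  [0.0, 0.0, 2.0, 0.0]])

    # Combinatorial graph Laplacian L = D - W.
    L = np.diag(W.sum(axis=1)) - W

    # Its eigenvectors play the role of the Fourier basis, and the
    # eigenvalues act as graph frequencies.
    eigvals, U = np.linalg.eigh(L)

    x = np.array([1.0, 0.9, 0.2, 0.1])   # signal on the nodes
    x_hat = U.T @ x                      # forward GFT: spectral coefficients
    x_rec = U @ x_hat                    # inverse GFT recovers x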
GNSS interference detection, localization, and mitigation
Satellite communication has become increasingly present in our lives. Applications such as remote sensing and self-localization are widely employed in our society. Such systems must be reliable and considerably robust to failures. In this sense, intentional or unintentional radio-frequency interference (RFI) is an ever-increasing threat. Even low-power RFI degrades the performance of systems that depend on the positioning and timing provided by global navigation satellite systems (GNSS). Modern infrastructure increasingly relies on GNSS, making GNSS itself a critical asset. Alleviating its vulnerability to interference by detecting, localizing, and eliminating interfering signals has become of paramount importance.
Main Research Focus
This research topic aims at employing detection techniques, such as energy-based methods and machine learning approaches. Regarding localization, direction-of-arrival (DOA) and time-difference-of-arrival (TDOA) based methods will be explored. In addition, for the separation task, standard algorithms, e.g., independent component analysis (ICA) and nonnegative matrix factorization (NMF), will be studied, as well as new techniques.
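As an illustration of the energy-based detection methods mentioned above, the sketch below flags signal windows whose energy exceeds a threshold calibrated on the noise floor; the simulated burst, window length, and threshold margin are illustrative assumptions rather than values from an actual GNSS front end.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4096
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    x = noise.copy()
    # Inject a narrowband RFI burst over 300 samples (illustrative).
    x[2000:2300] += 3.0 * np.exp(2j * np.pi * 0.1 * np.arange(300))

    win = 128
    energy = np.array([np.sum(np.abs(x[i:i + win])**2)
                       for i in range(0, n - win + 1, win)])

    # For unit-variance complex noise, each window's energy has mean `win`;
    # flag windows a few standard deviations above that level.
    threshold = win + 4.0 * np.sqrt(win)
    rfi_windows = np.nonzero(energy > threshold)[0]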
Massive MIMO and in-band full-duplex
Massive MIMO involves the use of multiple (8 to 128) antennas located on the same panel. Through spatio-temporal multiplexing, massive MIMO makes it possible to increase data rates, while beamforming ensures that the transmitted energy is focused on a device to improve its link budget.
A massive MIMO system, however, incorporates multiple (as many as the number of antennas) power amplifiers (PAs). The nonlinear characteristics of these PAs distort the signal, thus limiting the overall system performance. Digital predistortion (DPD) has been widely used for compensating PA nonlinearities in wireless systems. However, new low-complexity massive MIMO DPD approaches are needed to reduce the overhead of the conventional one-DPD-per-antenna arrangement.
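One common way to build such a per-antenna DPD is a memory-polynomial model fitted by least squares in an indirect-learning arrangement, sketched below; the toy PA behavior, model orders, and memory depth are assumptions for illustration only.

    import numpy as np

    def mp_matrix(x, orders=(1, 3, 5), memory=2):
        """Memory-polynomial regressors x[n-m] * |x[n-m]|^(k-1)."""
        n = len(x)
        cols = []
        for m in range(memory + 1):
            xm = np.concatenate([np.zeros(m, dtype=complex), x[:n - m]])
            for k in orders:
                cols.append(xm * np.abs(xm) ** (k - 1))
        return np.column_stack(cols)

    rng = np.random.default_rng(1)
    x = (rng.standard_normal(2000) + 1j * rng.standard_normal(2000)) / np.sqrt(2)

    # Toy PA: mild third-order nonlinearity plus one tap of memory (assumed).
    y = x - 0.05 * x * np.abs(x) ** 2
    y[1:] += 0.1 * x[:-1]

    # Indirect learning: fit a post-inverse from the normalized PA output
    # back to the PA input, then reuse it as the predistorter.
    g = np.mean(np.abs(x)) / np.mean(np.abs(y))
    A = mp_matrix(g * y)
    coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
    x_dpd = mp_matrix(x) @ coeffs   # predistorted signal to feed the PA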
Massive MIMO is one of the technologies being examined for deployment in 5G systems. The next-generation 5G mobile networks aim to sustain the evolution of mobile communications in terms of connectivity, throughput, and spectral efficiency, while enhancing the user experience. Along with massive MIMO, full-duplex, or in-band full-duplex (IBFD), operation is another method capable of fulfilling those requirements.
One of the major challenges of deploying IBFD is the so-called self-interference (SI) phenomenon, whereby the transmitted signal, which is much stronger (on the order of 106 dB) than the signal the receiver is simultaneously trying to acquire, leaks into the receive chain.
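Because the transmitted signal is known at the receiver, digital SI cancellation can be cast as a system-identification problem, as in the minimal least-squares sketch below; the two-tap SI channel and the signal powers are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 5000
    x = rng.standard_normal(n)           # known transmit signal
    s = 1e-3 * rng.standard_normal(n)    # weak signal of interest (assumed power)

    # Self-interference: the strong transmit signal leaks through a short
    # (assumed) two-tap channel into the receive chain.
    h = np.array([1.0, 0.4])
    y = np.convolve(x, h)[:n] + s

    # Estimate the SI channel by least squares using the known x, subtract.
    X = np.column_stack([x, np.concatenate([[0.0], x[:-1]])])
    h_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_clean = y - X @ h_hat              # residual approximates s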
Main Research Focus
This research topic aims at developing new, efficient MIMO DPD approaches to address the overhead due to the large number of antenna elements. Further, machine learning approaches, e.g., deep-learning-based techniques, will be employed for modelling the effects of PAs with memory and, possibly, the complete RF front-end characteristics, enabling the proposal of new compensation schemes. The research work will also look at adaptive approaches and transceiver architectures to alleviate the SI, enabling widespread deployment of IBFD. There is also scope for incorporating and compensating for other RF impairments within the IBFD transceiver architecture.
These two strands of research might eventually be combined by developing antenna selection and precoding algorithms for massive MIMO tailored to achieve good DPD performance while minimizing the SI induced by IBFD.
Adaptive filtering
Adaptive filtering (AF) plays an important role in the context of signal processing. The general applications of AF are system identification, channel equalization, signal enhancement, and prediction. System identification consists in learning the behavior of a given system by adjusting the coefficients of the adaptive filter according to a given function of its output; such a system can be a wireless channel, an electrical motor, or even a chemical plant. In channel equalization, the coefficients of the adaptive filter are adjusted in order to minimize the effects imposed by the channel, such as multipath and attenuation. In the signal enhancement application, the signal of interest is corrupted by noise; by using a correlated version of this noise, which is usually available, the adaptive filter can be trained to remove it, yielding a clean version of the signal of interest. Lastly, in the signal prediction scenario, the adaptive filter can predict, for instance, the temperature of a city, or the chance of rain on a given day, based on the behavior of the training data.
Adaptive filters are employed in smartphones and in many embedded systems, which require a low-power profile. In this sense, data-selective algorithms, which employ the set-membership framework, can reduce the computational complexity of adaptive filters by avoiding unnecessary updates, leading to power savings, as illustrated in the sketch below. In applications such as echo cancellation, where the number of filter coefficients can be large, sparsity-aware approaches should be applied, again, to reduce the computational burden.
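The minimal sketch below ties the two paragraphs above together: a set-membership NLMS filter identifies an unknown (assumed) system but updates its coefficients only when the output error exceeds a prescribed bound, so most input samples trigger no computation at all.

    import numpy as np

    rng = np.random.default_rng(3)
    n, order = 5000, 8
    w_true = rng.standard_normal(order)      # unknown system (illustrative)
    x = rng.standard_normal(n)
    d = np.convolve(x, w_true)[:n] + 0.01 * rng.standard_normal(n)

    w = np.zeros(order)
    gamma = 0.05        # set-membership error bound (assumed)
    updates = 0
    for i in range(order - 1, n):
        u = x[i - order + 1:i + 1][::-1]     # regressor, most recent first
        e = d[i] - w @ u
        if abs(e) > gamma:                   # data-selective update
            mu = 1.0 - gamma / abs(e)        # SM-NLMS step size
            w += mu * e * u / (u @ u + 1e-12)
            updates += 1
    # updates / n is typically far below 1: most samples cost nothing.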
Main Research Focus
This research topic aims at developing efficient techniques to reduce the computational complexity of adaptive filters, while maintaining or even enhancing performance. In addition, investigations of sparsity-aware techniques are being conducted.
Transceivers with reduced redundancy
A significant part of physical- and link-layer research in communication systems focuses on either developing new methods or enhancing existing ones in order to increase throughput. From a practical point of view, these investigations should always take into account the fundamental trade-off between performance gains and cost effectiveness. Computational complexity is amongst the factors that directly affect the cost effectiveness of new advances in communications, which explains why linear transceivers are still preferred in several practical applications.
Nowadays, most telecommunication specifications recommend segmenting the data into blocks before transmission. The resulting data blocks are usually transmitted separately in the so-called block-based transmission. Due to the frequency selectivity inherent to broadband communications, attenuated and delayed versions of the transmitted signal superimpose at the receiver. This superposition induces intersymbol interference (ISI) among the symbols that compose a given data block, and it also generates interblock interference (IBI) between adjacent transmitted data blocks.
Orthogonal frequency-division multiplexing (OFDM) is the most popular memoryless linear time-invariant (LTI) block-based transceiver that circumvents the IBI problem by inserting redundancy in the transmission. In addition, the redundancy leads to the elimination of ISI or to the minimization of the mean-square error (MSE) of the symbols at the receiver end. Whether the redundancy consists of a cyclic prefix (CP) or zero padding (ZP), simple equalizer structures can always be induced, as illustrated below. However, OFDM has some drawbacks, such as a high peak-to-average power ratio (PAPR), high sensitivity to carrier-frequency offset (CFO), and a (possibly) significant loss of spectral efficiency due to the redundancy insertion.
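The sketch below shows, end to end, how the CP redundancy turns a frequency-selective channel into a bank of simple one-tap equalizers at the FFT output; the channel taps, block size, and CP length are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    N, cp = 64, 8                         # block size and CP length (assumed)
    h = np.array([1.0, 0.5, 0.2])         # channel shorter than the CP (assumed)

    # QPSK symbols, one per subcarrier.
    s = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

    tx = np.fft.ifft(s) * np.sqrt(N)           # time-domain block
    tx_cp = np.concatenate([tx[-cp:], tx])     # cyclic prefix absorbs ISI/IBI

    rx = np.convolve(tx_cp, h)[:N + cp]        # frequency-selective channel
    rx = rx[cp:]                               # discard the corrupted prefix

    # After the FFT the channel is diagonal: one complex tap per subcarrier.
    H = np.fft.fft(h, N)
    s_hat = np.fft.fft(rx) / np.sqrt(N) / H    # zero-forcing one-tap equalizer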
Regarding spectral-resource usage, the amount of redundancy employed in OFDM systems depends on the delay spread of the channel, implying that both CP- and ZP-based transceivers waste the same bandwidth on redundant data. There are, nevertheless, many ways to increase the spectral efficiency of communication systems, such as decreasing the overall symbol-error probability in the physical layer so that less redundancy needs to be inserted in upper layers by means of channel coding. In general, this approach increases the costs in the physical layer, since it leads to more computationally complex transceivers, hindering its implementation in some practical applications.
Other means to improve spectral efficiency are, therefore, highly desirable. Reducing the amount of redundancy inserted in the physical layer is a possible solution. Only a few works have proposed decreasing the redundancy while constraining the transceiver to employ superfast algorithms. One of the most successful proposals relies on both the zero-padding and zero-jamming (ZJ) techniques to eliminate IBI while employing a reduced amount of redundancy along with fast Fourier transform (FFT) algorithms.
Visible light communication (VLC)
Visible light communication (VLC) is a technique that employs visible light to transmit data. In contrast to traditional radio-frequency (RF) communications, the key components that enable VLC are a light-emitting diode (LED), responsible for converting electrical signals into light at the transmitter, and a photodiode, responsible for converting this optical signal into a corresponding current level at the receiver end. VLC can be employed in a wide range of applications, such as: short-range communication systems, working as a complement to or even a substitute for RF systems; intelligent transport systems, providing communications among vehicles; the Internet-of-Things (IoT), where, for instance, toys can communicate with each other using LEDs; and indoor localization systems. Considering these applications, VLC systems feature some advantages compared to their RF counterparts, e.g., larger capacity, non-interference with RF waves, a higher level of security, spatial reuse, and low cost of deployment, just to mention a few.
VLC has been shown to feature several advantages. Nevertheless, one of the main issues to overcome in this technology is the highly nonlinear response due to the nonlinear current-voltage (I-V) relation of LEDs. To address this problem, suitable nonlinear equalizers must be employed in order to attain a reasonable bit error rate (BER) at the receiver. Other techniques, such as pre- and post-distortion, can also mitigate the nonlinear effects of a VLC system, but they are designed based on the I-V curve of a given LED at the transmitter, which can be a hard task if multiple LEDs are employed, as in multiple-input multiple-output (MIMO) systems.
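As a toy illustration of pre-distortion designed from an LED's I-V behavior, the sketch below fits a polynomial inverse of an assumed third-order nonlinearity by least squares; the curve is hypothetical, not a measured device, and a real design would also have to handle the LED's limited dynamic range.

    import numpy as np

    # Hypothetical LED nonlinearity over a normalized operating range.
    def led(v):
        return v - 0.3 * v**2 + 0.05 * v**3

    v = np.linspace(0.1, 1.0, 200)     # drive levels within the valid range
    i_out = led(v)

    # Fit the inverse curve (output -> input) and use it as a predistorter:
    # feeding predist(x) to the LED yields an output close to the desired x.
    predist = np.poly1d(np.polyfit(i_out, v, deg=5))

    x = np.linspace(0.15, 0.6, 50)     # desired (normalized) optical levels
    linearized = led(predist(x))       # approximately equal to x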
Underwater acoustic communication
Underwater communication has been attracting much attention in recent years. For instance, such communication systems are expected to play a critical role in the investigation of climate change by monitoring seismic activity and the biological changes that occur in the oceans. Underwater communication systems can also be used to perform remote maritime exploration. Some of these applications rely on video broadcasting, thus demanding high-throughput communication systems.
In the aforementioned applications, the use of electromagnetic signals is prohibitive, since the attenuation in salt water is much larger than in air, calling for signals of a different nature, such as acoustic signals. Indeed, acoustic signals are low-frequency mechanical waves, which are attenuated far less when propagating in an underwater environment. On the other hand, employing these signals is a very complicated task, since underwater acoustic (UWA) noise is intense and UWA channels feature strong time variations. Moreover, UWA communication is severely degraded by Doppler effects, due to the low propagation speed of acoustic waves and the ubiquitous relative motion between transmitter and receiver, as the quick computation below illustrates. Furthermore, UWA communication systems usually achieve low data rates, hindering their use in some applications.
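A back-of-the-envelope computation shows why Doppler is so much more severe underwater than in RF links; the relative speed below is an illustrative assumption.

    # Doppler scaling factor a = v / c for a relative speed of 5 m/s (assumed).
    c_acoustic = 1500.0    # approximate speed of sound in water, m/s
    c_rf = 3.0e8           # speed of light, m/s
    v = 5.0
    a_uwa = v / c_acoustic # ~3.3e-3: waveforms compressed/dilated by ~0.33%
    a_rf = v / c_rf        # ~1.7e-8: five orders of magnitude smaller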
Source localization and separation
The task of localizing sources is simultaneously very challenging and quite useful in a number of applications. Indeed, solving the source localization problem is a necessary step toward the proper implementation of many practical systems, such as video games, hearing aids, and surveillance schemes, just to mention a few. As the name suggests, source localization algorithms focus on finding the active agents of the environment, namely the sources, which can play the role of a person, an undesired interferer, a loudspeaker (acoustic actuator), or a jammer (RF actuator). The spatial information can be inferred directly from the acquired signals (instead of through intricate and computationally cumbersome modeling of the underlying acoustic environment) as long as they convey some sort of spatial diversity. The majority of source localization algorithms rely on the acquisition, by multiple sensors, of distinct versions of the emitted signals.
One of the drawbacks of source localization is a possible lack of synchronism among the sensors, which may hinder the use of TDOA-based algorithms. Another issue is the low signal-to-noise ratio (SNR) scenario, where the source signal is drastically attenuated by the environment. In acoustic source localization, the reverberation imposed by the environment is an additional effect that must be addressed.
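The sketch below estimates the TDOA between two sensors with the classical GCC-PHAT cross-correlation, whose phase-transform weighting lends some robustness to the reverberation mentioned above; the white source signal and the true delay are illustrative.

    import numpy as np

    rng = np.random.default_rng(5)
    n, true_delay = 4096, 12                  # samples (illustrative)
    s = rng.standard_normal(n)

    x1 = s + 0.05 * rng.standard_normal(n)
    x2 = np.roll(s, true_delay) + 0.05 * rng.standard_normal(n)

    # GCC-PHAT: whiten the cross-spectrum so only the phase (delay) remains.
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    cross = X2 * np.conj(X1)
    cc = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)

    lag = int(np.argmax(cc))
    if lag > n // 2:
        lag -= n                              # map to negative lags
    # lag should be close to true_delay (x2 lags x1 by true_delay samples).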
Source separation consists in separating the signal of interest from an undesired signal. In music applications, one may separate the piano sound from the singer's voice. In the radio-frequency scenario, the signal from an interfering source (jammer) may impair the communication between radio base stations and receivers, which shows the importance of source separation for this task.
Source separation is not always easy to perform. Most techniques have high computational complexity, and the separation may not be perfect. In other words, after separation the signal of interest can still be contaminated by the undesired signal, or it can come out considerably ``clean'' of interference yet drastically distorted.
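As a minimal example of the ICA approach mentioned earlier, the sketch below unmixes two synthetic sources using scikit-learn's FastICA; the mixing matrix and source waveforms are illustrative, and the recovered sources come back in arbitrary order and scale, which hints at why separation is rarely perfect.

    import numpy as np
    from sklearn.decomposition import FastICA

    t = np.linspace(0, 8, 2000)
    s1 = np.sin(2 * np.pi * 1.0 * t)            # e.g., an instrument
    s2 = np.sign(np.sin(2 * np.pi * 3.0 * t))   # e.g., an interfering source
    S = np.column_stack([s1, s2])

    A = np.array([[1.0, 0.5],                   # assumed mixing matrix
                  [0.4, 1.0]])
    X = S @ A.T                                 # two-sensor mixtures

    ica = FastICA(n_components=2, random_state=0)
    S_hat = ica.fit_transform(X)   # sources up to permutation and scaling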
Spectrum sensing and cognitive radio
Cognitive radio (CR) has emerged as a promising technique to deal with the current inefficient usage of the limited spectrum. In the CR context, users who have no spectrum licenses, also known as secondary users (SUs), are allowed to take advantage of temporarily unused licensed spectrum. In order to do so, SUs must sense which portions of the wireless spectrum are available, select suitable channels for their transmissions, manage the spectrum access with other SUs, and vacate those channels whenever primary users (PUs) ``request'' them. Within this process, the first step, referred to as spectrum sensing (SS), is crucial for the proper functioning of a CR system.
The performance of SS techniques implemented by a single CR node is highly limited by local channel impairments. It is well known that the most efficient way to improve such performance is to rely on cooperative spectrum sensing among nodes. Indeed, due to the (likely) independent statistical channel behavior measured across different node locations, one can increase the chances of correctly detecting vacancies (also known as white spaces) in the spectrum. Cooperation increases the reliability of detection through the diversity among SUs, besides transferring part of the detection computations to other secondary nodes or to a fusion center (FC) with higher computational capacity, thus forming a distributed or a centralized network, respectively.
There are many different approaches to tackle the cooperative spectrum sensing problem. From a practical point of view, the most attractive soft-combining techniques are those based on linearly combining the estimates from distinct SUs in order to decide whether or not there are white spaces in the spectrum. Indeed, such an approach gathers the computational simplicity of linear combiners, a very desirable feature when dealing with power-constrained CR nodes, with incoherent energy detection, thus disregarding any coherence assumptions among PUs and SUs, another desirable feature in practice.
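A minimal sketch of such linear soft combining under assumed per-SU SNRs: each SU reports its energy estimate, and the FC compares an SNR-weighted sum against a noise-calibrated threshold; every parameter below is illustrative.

    import numpy as np

    rng = np.random.default_rng(7)
    n_su, n = 4, 256
    snrs = np.array([0.5, 0.2, 1.0, 0.1])   # assumed linear SNR at each SU

    pu_active = True
    pu = rng.standard_normal(n)              # primary-user waveform

    # Each SU measures the average energy of its own noisy observation.
    energies = np.empty(n_su)
    for k in range(n_su):
        x = np.sqrt(snrs[k]) * pu * pu_active + rng.standard_normal(n)
        energies[k] = np.mean(x**2)

    # Linear soft combining at the FC with SNR-proportional weights.
    w = snrs / snrs.sum()
    statistic = w @ energies

    # Under noise only, each energy estimate has mean 1 and variance ~2/n.
    threshold = 1.0 + 3.0 * np.sqrt(2.0 / n) * np.linalg.norm(w)
    occupied = statistic > threshold         # True -> PU present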