Current Research Topics
Transceivers With Reduced Redundancy
A significant part of physical- and link-layer research in communication systems focuses on developing new methods or enhancing existing ones in order to increase throughput. From a practical point of view, these investigations should always take into account the fundamental trade-off between performance gains and cost effectiveness. Computational complexity is among the factors that directly affect the cost effectiveness of new advances in communications, which explains why linear transceivers are still preferred in several practical applications.
Nowadays, most telecommunication specifications recommend segmenting the data into blocks before transmission. The resulting data blocks are usually transmitted separately, in the so-called block-based transmission. Due to the frequency selectivity inherent in broadband communications, attenuated versions of the transmitted signal superpose at the receiver. This superposition, called intersymbol interference (ISI), is induced among the symbols that compose a given data block. The same undesired superposition of signals also generates interblock interference (IBI) between adjacent transmitted data blocks.
Orthogonal frequency-division multiplexing (OFDM) is the most popular memoryless linear time-invariant (LTI) block-based transceiver that circumvents the IBI problem by inserting redundancy in the transmission. In addition, the redundancy allows the elimination of ISI or the minimization of the mean-square error (MSE) of the symbols at the receiver end. Whether the redundancy consists of a cyclic prefix (CP) or zero padding (ZP), simple equalizer structures can always be employed. However, OFDM has some drawbacks, such as a high peak-to-average power ratio (PAPR), high sensitivity to carrier-frequency offset (CFO), and a (possibly) significant loss in spectral efficiency due to the redundancy insertion.
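To illustrate how CP redundancy enables a simple one-tap equalizer per subcarrier, the following is a minimal NumPy sketch; the block size, channel taps, and QPSK mapping are illustrative assumptions, not parameters of any specific standard:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64          # symbols per block (FFT size); illustrative choice
L = 4           # channel order (delay spread)
cp = L          # cyclic-prefix length must cover the channel memory

# Hypothetical multipath channel with L+1 complex taps
h = rng.standard_normal(L + 1) + 1j * rng.standard_normal(L + 1)

# One block of QPSK symbols
bits = rng.integers(0, 2, (N, 2))
s = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Transmitter: IFFT followed by cyclic-prefix insertion
x = np.fft.ifft(s)
x_cp = np.concatenate([x[-cp:], x])

# Channel: linear convolution (noise omitted to isolate equalization)
y = np.convolve(x_cp, h)[: N + cp]

# Receiver: drop CP, FFT, then one tap per subcarrier
Y = np.fft.fft(y[cp:])
H = np.fft.fft(h, N)          # channel frequency response
s_hat = Y / H                 # zero-forcing one-tap equalizer

print(np.max(np.abs(s_hat - s)))   # residual is at floating-point level
```

Because the CP turns the linear convolution into a circular one, the frequency-domain channel is diagonal, which is exactly what makes the per-subcarrier division sufficient.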
The single-carrier with frequency-domain (SC-FD) equalization technique is an efficient way to reduce both the PAPR and the CFO sensitivity as compared to the OFDM system. These advantages are attained without changing the overall complexity of the transceiver.
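The receiver-side structure of SC-FD can be sketched in the same spirit; all parameters below are hypothetical. The key difference from OFDM is that the symbols themselves travel in the time domain (no IFFT at the transmitter), which is what keeps the PAPR low, while the equalizer complexity remains one tap per frequency bin:

```python
import numpy as np

rng = np.random.default_rng(1)

N, L = 64, 4
cp = L
h = rng.standard_normal(L + 1) + 1j * rng.standard_normal(L + 1)

# Single-carrier block: the QPSK symbols are transmitted directly in time
bits = rng.integers(0, 2, (N, 2))
s = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
x_cp = np.concatenate([s[-cp:], s])      # CP only, no IFFT at the transmitter

y = np.convolve(x_cp, h)[: N + cp]       # noiseless channel, for illustration

# Receiver: FFT, one-tap equalization, then IFFT back to the time domain
H = np.fft.fft(h, N)
s_hat = np.fft.ifft(np.fft.fft(y[cp:]) / H)

print(np.max(np.abs(s_hat - s)))   # residual at floating-point level
```

The extra IFFT simply moves the transform pair from the transmitter to the receiver, so the overall FFT count per block matches that of OFDM.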
Regarding spectral-resource usage, the amount of redundancy employed in both OFDM and SC-FD systems depends on the delay spread of the channel, implying that both transceivers spend the same bandwidth on redundant data. Nevertheless, there are many ways to increase the spectral efficiency of communication systems, such as decreasing the overall symbol-error probability in the physical layer so that less redundancy needs to be inserted in upper layers by means of channel coding. In general, this approach increases the costs in the physical layer, since it leads to more computationally complex transceivers, hindering its implementation in some practical applications.
Other means to improve spectral efficiency are, therefore, highly desirable. Reducing the amount of redundancy inserted in the physical layer is a possible solution. Only a few works have proposed decreasing the redundancy while constraining the transceiver to employ superfast algorithms. One of the most successful proposals relies on both the zero-padding (ZP) and zero-jamming (ZJ) techniques to eliminate IBI with a reduced amount of redundancy, along with fast Fourier transform (FFT) algorithms.
Main Research Focus
This current research topic aims at proposing new structures for block-based transceivers with reduced redundancy. Such new structures must allow one to equalize the received data blocks efficiently. In other words, the structures are constrained to use only superfast algorithms. Indeed, we want to employ only fast transforms, such as the discrete Fourier transform, along with one- or two-tap equalizers in the transceiver structures in order to satisfy the aforementioned computational-complexity constraints.
A number of relevant issues must also be addressed related to the proposed structures, such as channel estimation, equalizer design, I/Q imbalance, CFO estimation/compensation, just to mention a few.
Localization of Acoustic Sensors and Sources
The task of localizing acoustic sources within 3-D Euclidean spaces is simultaneously very challenging and quite useful in a number of applications. Indeed, solving the so-called sound source localization (SSL) problem is a necessary step toward the proper implementation of many practical systems, such as video games, hearing aids, surveillance schemes, just to mention a few. As the name suggests, SSL algorithms focus on finding the active agents of the acoustic environment, namely the sound sources, which can play the role of a person, an undesired interferer, a loudspeaker (acoustic actuator), etc. The spatial information can be inferred directly from acquired signals (instead of intricate and computationally cumbersome modeling of the underlying acoustic environment) as long as they convey some sort of spatial diversity. The majority of SSL algorithms rely on the acquisition by multiple microphones (acoustic sensors) of distinct versions of the emitted signals. Among such algorithms, the ones based on generalized cross-correlation (GCC) or steered-response power (SRP) find widespread use in both practice and academic literature.
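A minimal sketch of the GCC family is the GCC-PHAT estimator of the time-difference of arrival (TDOA) between two microphones. The source signal, delay, and sampling rate below are synthetic assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 16000                 # hypothetical sampling rate (Hz)
true_delay = 23            # delay of mic 2 relative to mic 1, in samples

# Synthetic white source and two noisy microphone captures
s = rng.standard_normal(4096)
x1 = s + 0.05 * rng.standard_normal(4096)
x2 = np.roll(s, true_delay) + 0.05 * rng.standard_normal(4096)

# GCC-PHAT: cross-spectrum whitened by its own magnitude
X1 = np.fft.rfft(x1)
X2 = np.fft.rfft(x2)
G = X2 * np.conj(X1)
gcc = np.fft.irfft(G / (np.abs(G) + 1e-12))

# The peak location of the whitened cross-correlation gives the TDOA
lags = np.arange(len(gcc))
lags[lags > len(gcc) // 2] -= len(gcc)   # wrap to signed lags
tdoa = lags[np.argmax(gcc)]
print(tdoa)                # prints 23 (samples); tdoa / fs gives seconds
```

The phase transform (PHAT) weighting discards magnitude information, which sharpens the correlation peak and makes the estimate robust to moderate reverberation; the TDOAs from several microphone pairs are then combined geometrically to localize the source.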
When compared to the SSL problem, a relatively lesser-known problem is the so-called acoustic sensor localization (ASL), which can be regarded as the dual version of the SSL problem. Indeed, ASL algorithms focus on finding the passive agents of the acoustic environment, namely the acoustic sensors, which can play the role of a person carrying some device with a built-in microphone, or simply a stand-alone microphone. A possible application of ASL algorithms falls within the context of audio-based proximity and/or position detection of individual mobile devices, such as mobile phones, tablets, laptops, and PCs acting as individual sensors with built-in microphones. Precise detection of proximity and position has numerous applications, such as indoor navigation in public places (e.g., healthcare facilities and retail stores). It is worth reinforcing that ASL differs from SSL in the sense that the sensors are localized instead of the sources. Besides, the sensors have an unstructured configuration, as opposed to the well-defined and somewhat symmetric geometries often present in microphone arrays. Among ASL algorithms, the ones based on modifications/adaptations of GCC are by far the most employed.
Main Research Focus
This current research topic aims at proposing new algorithms for solving the SSL and ASL problems, taking into account practical issues, such as computational requirements, possible lack of synchronism between devices, non-linearities, just to mention a few.
Spectrum Sensing Using Cooperative Cognitive Radios
Cognitive radio (CR) has emerged as a promising technique to deal with the current inefficient usage of limited spectra. In the CR context, users who have no spectrum licenses, also known as secondary users (SUs), are allowed to take advantage of temporarily unused licensed spectrum. In order to do so, SUs must sense which portions of the wireless spectrum are available, select suitable channels for their transmissions, manage the spectrum access with other SUs, and free those channels whenever primary users (PUs) "request" them. Within this process, the first step, referred to as spectrum sensing (SS), is crucial for the proper functioning of a CR system.
The performance of SS techniques implemented by a single CR node is highly limited by local channel impairments. It is well known that the most efficient way to improve such performance is to rely on cooperative spectrum sensing among nodes. Indeed, due to the (likely) independent statistical channel behavior measured across different node locations, one can increase the chances of correctly detecting vacancies (also known as white spaces) in the spectrum. Cooperation increases the reliability of detection through the diversity among SUs, besides transferring part of the detection computations to other secondary nodes or to a fusion center (FC) with higher computational capacity, thus forming a distributed or a centralized network, respectively.
There are many different approaches to tackle the cooperative spectrum sensing problem. From a practical point of view, the most attractive soft-combining techniques are those based on the linear combining of the estimates from distinct SUs in order to decide whether or not there are white spaces in the spectrum. Indeed, such an approach gathers the computational simplicity of linear combiners, a very desirable feature when dealing with power-constrained CR nodes, with incoherent energy detection, which disregards any coherence assumptions among PUs and SUs, another desirable feature in practice.
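A minimal Monte Carlo sketch of such linear soft combining of energy detectors follows; the number of SUs, the per-SU SNRs, the SNR-proportional weights, and the empirically set false-alarm threshold are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

M = 4                                   # cooperating secondary users
K = 256                                 # samples per sensing window
snr = np.array([0.5, 1.0, 0.1, 0.2])    # hypothetical per-SU channel SNRs
trials = 2000

def energy_stats(pu_active):
    """Per-SU normalized energy estimates over many sensing windows."""
    obs = rng.standard_normal((trials, M, K))          # unit-variance noise
    if pu_active:
        # One common PU signal, seen by each SU at its own SNR
        sig = rng.standard_normal((trials, 1, K)) * np.sqrt(snr)[None, :, None]
        obs = obs + sig
    return np.mean(obs**2, axis=2)                     # shape (trials, M)

# Soft combining: weight each SU's energy estimate by its (assumed known) SNR
w = snr / snr.sum()
T0 = energy_stats(False) @ w            # combined statistic under H0 (idle)
T1 = energy_stats(True) @ w             # combined statistic under H1 (active)

# Threshold for a ~5% false-alarm rate, taken empirically from H0
thr = np.quantile(T0, 0.95)
print("detection probability:", np.mean(T1 > thr))
```

The linear combiner costs only M multiply-adds per decision at the FC, and the energy statistic needs no phase reference, matching the two practical requirements highlighted above.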
Main Research Focus
This current research topic aims at proposing new algorithms for improving the performance of spectrum sensing using cooperative cognitive radios, taking into account limitations usually found in practical systems, such as limited power of handheld devices, incoherence among network nodes, and presence of noise/interference.
Data-Selective Adaptive Algorithms
This investigation addresses new adaptive filtering solutions for several equalization and system-identification problems that usually arise in the broad areas of signals, multimedia, and telecommunications. We have proposed several set-membership affine projection algorithms incorporating selective updating mechanisms. However, some open research problems still need to be addressed in order to foster the adoption of data-selective algorithms in practical applications.
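As a concrete instance of a data-selective update, the following sketches the set-membership NLMS recursion (the simplest member of the set-membership affine projection family) on a hypothetical system-identification setup; the filter length, error bound, and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical system-identification setup
w_true = rng.standard_normal(8)         # unknown FIR system to identify
gamma = 0.1                             # prescribed error-magnitude bound
N = 2000                                # number of input samples

w = np.zeros(8)                         # adaptive filter coefficients
updates = 0
for _ in range(N):
    x = rng.standard_normal(8)                      # input regressor
    d = w_true @ x + 0.01 * rng.standard_normal()   # noisy desired signal
    e = d - w @ x                                   # a priori error
    if abs(e) > gamma:                  # data-selective check: update only
        mu = 1 - gamma / abs(e)         # when the error exceeds the bound
        w += mu * e * x / (x @ x + 1e-12)
        updates += 1

print("updates used:", updates, "of", N)
print("misalignment:", np.linalg.norm(w - w_true))
```

After the initial transient, most incoming data pairs already satisfy the error bound and trigger no coefficient update, which is precisely the computational saving that makes data-selective algorithms attractive for power-constrained devices.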
Main Research Focus
The application of data-selective algorithms in the context of spectrum sensing using cognitive radios or in the context of sparse system identification is currently being considered.