EU H2020 project
Open Deep Learning Toolkit for Robotics (acronym: OpenDR) is a new H2020 research project, launched on January 1st, 2020, which aims to develop a modular, open and non-proprietary toolkit for core robotic functionalities by harnessing deep learning to provide advanced perception and cognition capabilities, thus meeting the general requirements of robotic applications in the areas of healthcare, agri-food and agile production. The OpenDR project is coordinated by Prof. Anastasios Tefas at the Aristotle University of Thessaloniki, will run from January 2020 to December 2022, and brings together 8 partners from 7 countries.
EU H2020 project
PlasmoniAC is a 3-year H2020 research project aiming to release a whole new class of energy- and size-efficient feed-forward and recurrent artificial plasmonic neurons with clock frequencies of up to 100 GHz and energy- and footprint-efficiencies that are 1 and 6 orders of magnitude better, respectively, than the current electronics-based state of the art. It adopts the best-in-class material and technology platforms for optimizing computational power, size and energy in every one of its constituent functions, harnessing the proven high-bandwidth and low-loss credentials of photonic interconnects together with the nm-scale memory function of memristor nanoelectronics, and bridging them by introducing plasmonics as the ideal technology for offering photonic-level bandwidths and electronic-level footprints within ultra-low energy consumption envelopes. In a holistic hardware/software co-design approach, PlasmoniAC will follow the path from technology development to addressing real application needs by developing a new set of DL training models and algorithms and embedding its new technology into ready-to-use software libraries.
Photonic Neuromorphic Hardware for Deep Learning Applications over Light-enabled Integrated Systems – DeepLight
In recent years, Deep Learning (DL) achieved state-of-the-art performance in several challenging tasks, ranging from self-driving cars, and reinforcement learning agents that outperform humans in several games, to medical diagnosis and research. DL models are composed of several non-linear neural network layers, where each of these layers analyzes the output of the previous layer, allowing the network to capture increasingly complex concepts and model non-linear phenomena as the depth of the network increases. Even though these architectures allowed for tackling several difficult tasks with great success, they come with a significant drawback: tremendous amounts of computing power are needed for training and deploying DL models. State-of-the-art DL models use millions of parameters and require several GFLOPs (billions of floating point operations) for a single inference pass. This fact, together with the increasing demand for consumer deep learning applications (e.g., self-driving cars, intelligent mobile assistants, etc.), led to the development of specialized hardware and software platforms for deploying DL models. To this end, Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs) are typically used. State-of-the-art enterprise-grade accelerators (NVIDIA Tesla V100) are capable of achieving 15.7 TFLOPS (single precision arithmetic) using 300 Watts of power. For inference-only applications, i.e., not training the DL model, more efficient platforms also exist: NVIDIA Tesla P4 for servers (5.5 TFLOPS using less than 75 Watts) and NVIDIA Jetson TX2 for embedded applications (about 1.5 TFLOPS using less than 15 Watts).
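As a small illustration of the layered structure and floating point cost described above, the sketch below stacks a few dense layers in NumPy and counts the approximate FLOPs of one forward pass. The layer sizes are arbitrary and chosen only for the example; real models mentioned in the text are orders of magnitude larger.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Toy 3-layer network: each layer transforms the previous layer's output,
# letting the model capture increasingly complex, non-linear structure.
sizes = [64, 128, 128, 10]  # input dim, two hidden layers, output dim
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x):
    for W in weights[:-1]:
        x = relu(x @ W)          # linear map followed by a non-linearity
    return x @ weights[-1]       # final linear layer (logits)

logits = forward(rng.standard_normal(64))

# A dense layer with an (m, n) weight matrix costs roughly 2*m*n floating
# point operations (one multiply and one add per weight) per input sample.
flops = sum(2 * m * n for m, n in zip(sizes[:-1], sizes[1:]))
print(logits.shape, flops)
```

Counting FLOPs this way (ignoring the cheap activation functions) is the standard back-of-the-envelope estimate behind the GFLOPs figures quoted above.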
However, we are still far from fulfilling the needs of modern DL models that are held back by computational power limitations and/or energy constraints. This is where DeepLight steps in, investing in photonics to offer a radically new DL-enabling platform! DeepLight aims to transform the state-of-the-art and highly energy-efficient optical interconnect technology into a powerful DL technology, deploying the necessary hardware photonic infrastructure and DL models and following, right from the beginning, a tight hardware-software co-design approach. DeepLight has aligned its transformative character along the most energy-efficient integrated photonic building blocks that can, however, ensure a successful transition to a powerful neuromorphic photonic portfolio.
EU and National EYDE-ETAK project
The Deep Learning for Intelligent Financial Portfolio Management Leveraging Semantic Analysis of Social Media project (DeepFinance) aims to develop a complete platform for semantic and sentiment analysis of social media streams using Deep Learning. This platform can then be used to further develop unified tools for financial portfolio management that effectively fuse multi-modal information extracted from various sources (including social media streams). More specifically, DeepFinance aims to:
- Develop deep learning tools for automated portfolio management, aiming to achieve better performance (according to various financial metrics) than the currently used strategies, which mainly consist of handcrafted decision rules. Research will focus on developing robust agents, using deep learning and deep reinforcement learning methods, that directly maximize profit, as well as on novel price control strategies, e.g., directly placing limit orders, where the agent decides on the quantity, price and time of an order simultaneously.
- Develop a platform for semantic analysis of social media streams, in order to provide semantic and sentiment analysis services for specific stocks, financial indices, etc. The developed platform will integrate state-of-the-art deep learning and natural language processing tools, allowing for semantic and sentiment analysis of heterogeneous data streams. At the same time, the platform will allow for discovering cause-effect correlations between various events, providing an additional tool that DS can integrate into its products.
- Integrate the semantic and sentiment analysis services into portfolio management products that take into account information regarding the market’s sentiment extracted from social media. Develop and integrate multi-modal deep learning and deep reinforcement learning methods for portfolio management that take into account additional heterogeneous information (limit order book, stock/index prices, sentiment, etc.).
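As a purely illustrative sketch of fusing market data with a sentiment signal (the blending rule, parameter `alpha` and all input values below are hypothetical, not the DeepFinance method), one simple way to turn price momentum and per-asset sentiment scores into portfolio weights is:

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def allocate(returns, sentiment, alpha=0.5):
    """Blend recent price momentum with a per-asset social-media sentiment
    score (both hypothetical inputs) into long-only portfolio weights."""
    momentum = returns.mean(axis=1)              # average recent return per asset
    score = alpha * momentum + (1 - alpha) * sentiment
    return softmax(score)                        # positive weights summing to 1

rng = np.random.default_rng(1)
recent_returns = rng.normal(0.0, 0.01, size=(4, 20))  # 4 assets, 20 periods
sentiment = np.array([0.2, -0.1, 0.05, 0.0])          # e.g., from tweet analysis
w = allocate(recent_returns, sentiment)
print(w, w.sum())
```

In the project itself, such a handcrafted blending rule is exactly what the deep reinforcement learning agents are meant to replace, learning the fusion of modalities end-to-end instead.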
EU and National EYDE-ETAK project
The aim of the Machine Learning System for Energy Data Analysis and Management project (MALENA) is to develop innovative, state-of-the-art software tools covering PPC’s participation needs in the energy markets, introduce innovation and expertise within the company, and provide PPC with a software solution that frees the company from license restrictions and third-party software companies. At the same time, the provision of a personalized service to consumers, giving them integrated access to their energy data and allowing them to manage their consumption, broadens PPC’s services portfolio with the integration of current trends in the energy sector. The development of innovative load and RES prediction algorithms will lead to the creation, for the first time in the Greek Energy Market, of an integrated management tool which aspires to become a reference for all other participants in the market.

To pursue these objectives, the project investigates two main research paths for forecasting multiple future values of a time series: a) deep neural networks and b) multi-target prediction. On the one hand, deep neural networks have revolutionized machine learning in recent years, achieving top results on unstructured data (images, video, audio, text). On the other hand, multi-target prediction techniques allow the exploitation of algorithms, such as extreme gradient boosting, that achieve top results in tasks with structured data involving multiple target variables, such as load forecasting. The project studies the application of these techniques to time series of energy and weather data. In addition, it studies graph mining techniques and methods for scaling machine learning up to data streams using graphics processing units and cloud computing.
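A minimal sketch of the multi-target forecasting path, using scikit-learn's MultiOutputRegressor around a gradient boosting model as a stand-in for extreme gradient boosting, on a synthetic load series (the lag and horizon values are arbitrary choices for the example, not MALENA's configuration):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
# Synthetic daily "load" series with a weekly cycle plus noise.
t = np.arange(600)
load = 100 + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 1, t.size)

LAGS, HORIZON = 14, 3            # use 14 past days to predict the next 3
n = len(load) - LAGS - HORIZON
X = np.stack([load[i:i + LAGS] for i in range(n)])
Y = np.stack([load[i + LAGS:i + LAGS + HORIZON] for i in range(n)])

# One boosted-tree regressor per target step, wrapped as a multi-target model,
# so all HORIZON future values are predicted from the same lagged features.
model = MultiOutputRegressor(GradientBoostingRegressor(n_estimators=50))
model.fit(X[:-50], Y[:-50])
pred = model.predict(X[-50:])     # forecast the last 50 held-out windows
mae = np.abs(pred - Y[-50:]).mean()
print(pred.shape, round(mae, 2))
```

Fitting one model per target variable is the simplest multi-target strategy; the project's interest lies in techniques that also exploit the dependencies between the target steps.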
Lightweight Deep Learning Models for Signal and Information Analysis
EU and National (EPAEK-EDBM)
The main research objective of this project is the design, training and implementation of deep learning (DL) models that are efficient in their use of computing resources, memory and energy (lightweight models), using strict mathematical principles to transfer knowledge from complex and slow DL models to lightweight ones. Indicatively, the following objectives are defined:
- Development of a probabilistic foundation of neural representations and use of Probability and Information Theory measures to transfer knowledge from one network to another.
- Study of deep learning models as continuously varying systems, no longer treated as “black boxes” but as systems in which all individual parameters are controlled in a mathematically sound way.
- Application of the developed methodologies to training lightweight DL models for various signal and information analysis applications.
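One widely used information-theoretic measure for transferring knowledge between networks, in the spirit of the first objective above, is the KL divergence between temperature-softened teacher and student outputs. The sketch below is a generic distillation-loss example with made-up logits, not the project's specific method:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T                                    # temperature softening
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_kl(teacher_logits, student_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student
    distributions; higher T exposes more of the teacher's 'dark knowledge'
    about the relative similarity of the non-target classes."""
    p = softmax(teacher_logits, T)               # soft teacher targets
    q = softmax(student_logits, T)
    # The T*T factor keeps gradient magnitudes comparable across temperatures.
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

teacher = np.array([[5.0, 1.0, -2.0]])
good_student = np.array([[4.0, 0.5, -1.5]])      # roughly mimics the teacher
bad_student = np.array([[-2.0, 1.0, 5.0]])       # disagrees with the teacher
print(distillation_kl(teacher, good_student) < distillation_kl(teacher, bad_student))
```

Minimizing such a divergence during the lightweight model's training pulls its output distribution toward the complex model's, which is the basic mechanism behind probabilistic knowledge transfer.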
COST and EU project (Action CA16101)
The main objective of this Action, entitled MULTI-modal Imaging of FOREnsic SciEnce Evidence (MULTI-FORESEE) – tools for Forensic Science, is to promote innovative, multi-informative, operationally deployable and commercially exploitable imaging solutions/technologies to analyse forensic evidence. Forensic evidence includes, but is not limited to, fingermarks, hair, paint, biofluids, digital evidence, fibers, documents and living individuals. Imaging technologies include optical, mass spectrometric, spectroscopic, chemical, physical and digital forensic techniques, complemented by expertise in IT solutions and computational modelling. These technologies enable multiple types of physical and chemical information to be captured in one analysis, from one specimen, with the information being more easily conveyed and understood for more rapid exploitation. The enhanced value of the evidence gathered will be conducive to much more informed investigations and judicial decisions, thus contributing both to savings to the public purse and to a speedier and stronger criminal justice system. The Action will use the unique networking and capacity-building capabilities provided by the COST framework to bring together the knowledge and expertise of Academia, Industry and End Users. This synergy is paramount to boost imaging technological developments which are operationally deployable.
COST and EU project (Action CA17137)
A network for Gravitational Waves, Geophysics and Machine Learning – The breakthrough detection of gravitational waves on September 14, 2015 was made possible through a synergy of techniques drawing from expertise in physics, mathematics, information science and computing. At present, there is rapidly growing interest in Machine Learning (ML), Deep Learning (DL), classification problems, data mining and visualization and, in general, in the development of new techniques and algorithms for efficiently handling the complex and massive data sets found in what has been coined “Big Data”, across a broad range of disciplines, from the Social Sciences to the Natural Sciences. The rapid increase in the computing power at our disposal and the development of innovative techniques for the rapid analysis of data will be vital to the exciting new field of Gravitational Wave (GW) Astronomy, on specific topics such as control and feedback systems for next-generation detectors, noise removal, data analysis and data-conditioning tools.

The discovery of GW signals from colliding binary black holes (BBH) and the likely existence of a newly observable population of massive, stellar-origin black holes have made the analysis of low-frequency GW data a crucial mission of GW science. The low-frequency performance of Earth-based GW detectors is largely determined by their capability to suppress ambient seismic noise. This COST Action aims at creating a broad network of scientists from four different areas of expertise, namely GW physics, Geophysics, Computing Science and Robotics, with the common goal of tackling challenges in data analysis and noise characterization for GW detectors.