Projects

RoboSAPIENS

Horizon Europe 2021-2027 project

Robotic Safe Adaptation In Unprecedented Situations (acronym: RoboSAPIENS) is a research project which was launched on January 1st, 2024 and is funded by the Horizon Europe 2021-2027 research and innovation programme under grant agreement No 101133807. The project is coordinated by Aarhus University, Denmark, and will run throughout the period January 2024 – December 2026.

Information: The robots of tomorrow will be endowed with the ability to adapt to drastic and unpredicted changes in their environment, including humans. Such adaptations cannot, however, be boundless: the robot must stay trustworthy, i.e. an adaptation should not merely be a recovery into degraded functionality. Instead, it must be a true adaptation, meaning that the robot changes its behavior while maintaining or even increasing its expected performance, and stays at least as safe and robust as before. RoboSAPIENS will focus on autonomous robotic software adaptations and will lay the foundations for ensuring that such adaptations are carried out in an intrinsically safe, trustworthy and efficient manner, thereby reconciling open-ended self-adaptation with safety by design. RoboSAPIENS will also transform these foundations into ‘first time right’ design tools and robotic platforms, and will validate and demonstrate them up to TRL4. To achieve this overall goal, RoboSAPIENS will extend the state of the art along four main objectives:

1. Enable robotic open-ended self-adaptation in response to unprecedented structural and environmental changes.
2. Advance safety engineering techniques to assure robotic safety before, during and after adaptation.
3. Advance deep learning techniques to actively reduce uncertainty in robotic self-adaptation.
4. Assure the trustworthiness of systems that use both deep learning and computational architectures for robotic self-adaptation.

To realise these objectives, RoboSAPIENS will extend techniques such as MAPE-K (Monitor, Analyze, Plan, Execute, Knowledge) and deep learning to set up generic adaptation procedures, and will also incorporate an SSH (Social Sciences and Humanities) dimension. RoboSAPIENS will demonstrate this trustworthy robotic self-adaptation on four industry-scale use cases centered around an industrial disassembly robot, a warehouse robotic swarm, the hull of an autonomous vessel, and human-robot interaction.
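
To illustrate the MAPE-K pattern mentioned above, the following is a minimal sketch of an adaptation loop in Python. All names, the toy safety bound and the adaptation rule are hypothetical and are not RoboSAPIENS components.

```python
# Minimal sketch of a MAPE-K style adaptation loop (illustrative only;
# all class and function names here are hypothetical, not RoboSAPIENS APIs).
from dataclasses import dataclass, field


@dataclass
class Knowledge:
    """Shared knowledge base consulted by every MAPE step."""
    safe_speed_limit: float = 1.0          # assumed safety bound
    history: list = field(default_factory=list)


def monitor(sensor_reading: float, k: Knowledge) -> float:
    k.history.append(sensor_reading)
    return sensor_reading


def analyze(reading: float, k: Knowledge) -> bool:
    # Flag an anomaly when the reading exceeds the known safe bound.
    return reading > k.safe_speed_limit


def plan(anomaly: bool, k: Knowledge) -> dict:
    # Plan a conservative adaptation rather than a blind recovery.
    return {"speed": k.safe_speed_limit * 0.5} if anomaly else {}


def execute(adaptation: dict) -> None:
    if adaptation:
        print(f"Adapting actuators: {adaptation}")


def mape_k_step(sensor_reading: float, k: Knowledge) -> None:
    reading = monitor(sensor_reading, k)
    anomaly = analyze(reading, k)
    execute(plan(anomaly, k))


if __name__ == "__main__":
    kb = Knowledge()
    for r in (0.4, 0.9, 1.7):   # the last reading violates the safety bound
        mape_k_step(r, kb)
```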

DeepLET

HFRI and EU project

Energy Efficient and Trustworthy Deep Learning (acronym: DeepLET) is a research project which was launched on 4th December 2023 and is funded by the Hellenic Foundation for Research & Innovation (H.F.R.I.) and the European Union under the call “Basic Research Financing (Horizontal support for all Sciences), National Recovery and Resilience Plan (Greece 2.0)”. The project, which will run for two years, is coordinated by the Computational Intelligence and Deep Learning (CIDL) research group of the Aristotle University of Thessaloniki, while one more partner, Aarhus University of Denmark, will contribute to the successful implementation of the project.

Information: Deep Learning (DL) has achieved tremendous performance jumps over the last decade in several computer vision and machine learning tasks. However, DL models are becoming more and more complex, requiring vast amounts of computational power and energy for both training and inference. These requirements are especially limiting in many applications where significant energy and computational power constraints exist, restricting the speed and the accuracy of the deployed models, while the large-scale deployment of DL can have a significant environmental impact and comes with a rising energy cost. At the same time, DL typically leads to black-box models that cannot provide any explanation of why they made a specific decision. This limitation is especially important in many performance-critical applications, in which wrong decisions made by the model can cause immediate harm, e.g., in autonomous driving. DeepLET aims to overcome these limitations by developing novel methods, along with the appropriate theory, for designing, training and deploying lightweight and energy-efficient DL models. DeepLET also aims to develop trustworthy and explainable deep learning models that will increase confidence in the way DL models work, e.g., by interpreting the way DL models behave, increasing the trust in DL models, and identifying situations where DL models must not be trusted. Finally, DeepLET aims to demonstrate the actual efficiency of the developed energy-efficient, lightweight and trustworthy DL models by providing a collection of open-source tools (in the form of an open-source library) to the research community and industry, enabling them to use, adapt and extend the developed methods. DeepLET is a high-risk, high-gain project that goes beyond the current state of the art, aiming to solve an important problem that currently prevents DL from providing effective solutions for various applications.
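
One simple, widely used way to flag situations where a model's prediction should not be trusted is to look at the entropy of its softmax output. The sketch below illustrates the idea with random logits and an assumed threshold; it is not one of DeepLET's methods.

```python
# Minimal sketch of flagging untrustworthy predictions via softmax entropy
# (illustrative; the threshold and the random logits are assumptions).
import torch

logits = torch.randn(5, 10)                        # outputs of some classifier
probs = torch.softmax(logits, dim=1)
entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

threshold = 1.5                                    # assumed, task-dependent
for i, h in enumerate(entropy):
    status = "do not trust" if h.item() > threshold else "confident"
    print(f"sample {i}: entropy={h.item():.2f} -> {status}")
```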

INTERSOC

EU project

INTERconnected Security Operation Centres (acronym: INTERSOC) is a research project which was launched on 1st January 2024 and is funded by the European Union under the Digital Europe Programme.

Information: INTERSOC aims to improve disruption preparedness, the resilience of digital infrastructures, and capacity building through advanced threat forecasting, cyber-incident detection and response capabilities at national and EU level, and dedicated training sessions in digital infrastructure security, while respecting privacy and other fundamental rights. To achieve this, INTERSOC will design and develop a user-centric intelligent threat defence and decision support platform by uniquely combining:

1. Highly sophisticated network and system behavioural monitoring, towards the identification of anomalies caused by novel multi-faceted attacks. This will be achieved by enhancing traditional SIEMs and IDSs with behavioural and decisional Artificial Intelligence (AI) algorithms (a minimal sketch of such an anomaly detector follows this entry).
2. A low-code approach to security orchestration and incident management automation.
3. Decentralised, confidential Cyber Threat Information (CTI) sharing based on peer-to-peer networks and in compliance with the EU regulatory framework.
4. Trust models and trustworthy technology fine-tuned to address trust relationships when sharing information over the internet.
5. Risk and threat analysis, impact assessment and risk treatment to identify, analyse and eliminate security threats and vulnerabilities of the pilot systems.
6. Enhanced penetration tools and methodologies tackling emerging vulnerabilities. The tools will be used to actively test and attack the security of SOCs on the pilot systems.
7. Cutting-edge trustworthy AI algorithms, meticulously developed taking into account the evolving EU regulatory framework (e.g. the proposal for the AI Act) and standards working groups (e.g. CEN/CLC JTC21 WG4).
8. A cyber-range-type virtualization platform that will facilitate the deployment and hosting of advanced red/blue team exercises, fostering capacity building and enhancing user awareness.

The platform will be validated in three diverse sectors (banking, energy, CSIRT training) over a set of use cases.
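
As a toy illustration of the behavioural anomaly detection mentioned in item 1, the sketch below fits an Isolation Forest on synthetic "normal" monitoring features and flags deviating events. The features, the contamination setting and the detector choice are assumptions, not the INTERSOC design.

```python
# Minimal sketch of behavioural anomaly detection over monitoring features,
# in the spirit of augmenting SIEM/IDS data with an AI-based detector
# (synthetic data; not the INTERSOC platform).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_traffic = rng.normal(0, 1, size=(1000, 4))        # e.g. rate, size, ports, errors
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

new_events = np.vstack([rng.normal(0, 1, size=(5, 4)),   # ordinary events
                        rng.normal(6, 1, size=(2, 4))])  # clearly deviating events
labels = detector.predict(new_events)                     # +1 = normal, -1 = anomaly
print(labels)
```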

OpenDR

EU H2020 project

Open Deep Learning Toolkit for Robotics (acronym: OpenDR) is an H2020 research project which was launched on January 1st, 2020 and aims to develop a modular, open and non-proprietary toolkit for core robotic functionalities by harnessing deep learning to provide advanced perception and cognition capabilities, thereby meeting the general requirements of robotic applications in the areas of healthcare, agri-food and agile production. The OpenDR project is coordinated by Prof. Anastasios Tefas at the Aristotle University of Thessaloniki, runs throughout the period January 2020 – December 2023, and brings together 8 partners from 7 countries.

Visit OpenDR website >


PlasmoniAC 

EU H2020 project

PlasmoniAC is a 3-year H2020 research project aiming to release a whole new class of energy- and size-efficient feed-forward and recurrent artificial plasmonic neurons with clock frequencies of up to 100 GHz and energy and footprint efficiencies that are, respectively, 1 and 6 orders of magnitude better than the current electronics-based state of the art. It adopts the best-in-class material and technology platforms for optimizing computational power, size and energy in each of its constituent functions, harnessing the proven high-bandwidth and low-loss credentials of photonic interconnects together with the nm-scale memory function of memristor nanoelectronics, and bridging them by introducing plasmonics as the ideal technology for offering photonic-level bandwidths and electronic-level footprint computations within ultra-low energy consumption envelopes. In a holistic hardware/software co-design approach, PlasmoniAC will follow the path from technology development to addressing real application needs by developing a new set of DL training models and algorithms and embedding its new technology into ready-to-use software libraries.

Visit PlasmoniAC website >

DeepLight

HFRI project

Photonic Neuromorphic Hardware for Deep Learning Applications over Light-enabled Integrated Systems – DeepLight

In recent years, Deep Learning (DL) has achieved state-of-the-art performance in several challenging tasks, ranging from self-driving cars and reinforcement learning agents that outperform humans in several games, to medical diagnosis and research. DL models are composed of several non-linear neural network layers, where each layer analyzes the output of the previous one, allowing the network to capture increasingly complex concepts and model non-linear phenomena as the depth of the network increases. Even though these architectures have allowed several difficult tasks to be tackled with great success, they come with a significant drawback: tremendous amounts of computing power are needed for training and deploying DL models. State-of-the-art DL models use millions of parameters and require several GFLOPs (billions of floating-point operations) for a single inference. This fact, together with the increasing demand for consumer deep learning applications (e.g., self-driving cars, intelligent mobile assistants, etc.), has led to the development of specialized hardware and software platforms for deploying DL models. To this end, Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs) are usually used. State-of-the-art enterprise-grade accelerators (NVIDIA Tesla V100) are capable of achieving 15.7 TFLOPS (single-precision arithmetic) using 300 Watts of power. For inference-only applications, i.e., when not training the DL model, more efficient platforms also exist: the NVIDIA Tesla P4 for servers (5.5 TFLOPS using less than 75 Watts) and the NVIDIA Jetson TX2 for embedded applications (about 1.5 TFLOPS using less than 15 Watts).
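
For reference, the throughput-per-Watt of the accelerators quoted above can be computed directly from those figures (the P4 and TX2 power values are upper bounds, so the resulting efficiencies are lower bounds):

```python
# Energy efficiency (TFLOPS per Watt) for the accelerators cited above,
# computed from the figures quoted in the text.
accelerators = {
    "NVIDIA Tesla V100 (training)": (15.7, 300),   # (TFLOPS, Watts)
    "NVIDIA Tesla P4 (inference)":  (5.5,  75),    # power is an upper bound
    "NVIDIA Jetson TX2 (embedded)": (1.5,  15),    # power is an upper bound
}

for name, (tflops, watts) in accelerators.items():
    print(f"{name}: {tflops / watts:.3f} TFLOPS/W")
```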

However, we are still far from fulfilling the needs of modern DL models, which are held back by computational power limitations and/or energy constraints. This is where DeepLight steps in, investing in photonics to offer a radically new DL-enabling platform. DeepLight aims to transform state-of-the-art, highly energy-efficient optical interconnect technology into a powerful DL technology, deploying the necessary photonic hardware infrastructure and DL models and following a tight hardware-software co-design approach right from the beginning. DeepLight aligns its transformative agenda with the most energy-efficient integrated photonic building blocks that can ensure a successful transition to a powerful neuromorphic photonic portfolio.

Visit DeepLight website >

DeepFinance

EU and National EYDE-ETAK project

The Deep Learning for Intelligent Financial Portfolio Management Leveraging Semantic Analysis of Social Media project (DeepFinance) aims to develop a complete platform for semantic and sentiment analysis of social media streams using Deep Learning. This platform can then be used to further develop unified tools for financial portfolio management that can effectively fuse multi-modal information extracted from various sources (including social media streams). More specifically, DeepFinance aims to:

  • Develop deep learning tools for automated portfolio management, aiming to achieve better performance (according to various financial metrics) than the currently used strategies, which mainly consist of handcrafted decision rules. Research will focus on developing robust deep learning methods. Furthermore, DeepFinance will investigate robust agents, built with deep learning and deep reinforcement learning, that aim to directly maximize profit, as well as novel price control strategies, e.g., directly placing limit orders, where the agent simultaneously decides the quantity, price and time at which to place an order (a minimal sketch of such an allocation policy follows this list).
  • Develop a platform for semantic analysis of social media streams, in order to provide semantic and sentiment analysis services for specific stocks, financial indices, etc. The developed platform will integrate state-of-the-art deep learning and natural language processing tools, allowing for semantic and sentiment analysis of heterogeneous data streams. At the same time, the platform will allow for finding cause-effect correlations between various events, providing an additional tool which DS can integrate into its products.
  • Integrate the semantic and sentiment analysis services in order to develop portfolio management products that take into account the information regarding market sentiment that can be extracted from social media. Develop and integrate multi-modal deep learning and deep reinforcement learning methods for portfolio management that take into account additional heterogeneous information (limit order book, stock/index prices, sentiment, etc.).
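
The sketch below illustrates the kind of neural allocation policy referred to in the first bullet: a small network maps a market state (prices, sentiment features, etc.) to portfolio weights via a softmax, and a toy one-step log-return reward is computed. The architecture, sizes and reward are assumptions, not the DeepFinance models.

```python
# Minimal sketch of a neural portfolio-allocation policy with a toy reward
# (illustrative PyTorch code; layer sizes and the log-return reward are
# assumptions, not the project's models).
import torch
import torch.nn as nn


class AllocationPolicy(nn.Module):
    def __init__(self, n_features: int, n_assets: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_assets),
        )

    def forward(self, market_state: torch.Tensor) -> torch.Tensor:
        # Softmax turns raw scores into non-negative weights that sum to 1.
        return torch.softmax(self.net(market_state), dim=-1)


policy = AllocationPolicy(n_features=10, n_assets=5)
state = torch.randn(1, 10)                          # e.g. prices + sentiment features
weights = policy(state)
returns = torch.randn(1, 5) * 0.01                  # simulated one-step asset returns
reward = torch.log1p((weights * returns).sum())     # log portfolio return
print(weights, reward)
```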

The project has been co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE (project code: Τ2EDK-02094).

 

MALENA

EU and National EYDE-ETAK project

The aim of the Machine Learning System for Energy Data Analysis and Management project (MALENA) is to develop innovative, state-of-the-art software tools covering PPC's needs for participation in the energy markets, introduce innovation and expertise within the company, and provide PPC with a software solution that frees the company from license restrictions and third-party software companies. At the same time, the provision of a personalized service to consumers, giving them integrated access to their energy data and allowing them to manage their consumption, broadens PPC's services portfolio by integrating current trends in the energy sector. The development of innovative load and RES prediction algorithms will lead to the creation, for the first time in the Greek energy market, of an integrated management tool which aspires to become a reference for all other participants in the market. To pursue these objectives, deep neural networks and multi-target prediction techniques will be utilized. The project investigates two main research paths for forecasting multiple future values of time series: (a) deep neural networks and (b) multi-target prediction. On the one hand, deep neural networks have revolutionized machine learning during the past years, achieving top results for unstructured data (images, video, audio, text). On the other hand, multi-target prediction techniques allow the exploitation of algorithms – such as extreme gradient boosting – which achieve top results in tasks with structured data involving multiple target variables, such as load forecasting. The project studies the application of these techniques to time series of energy and weather data. In addition, the project studies graph mining techniques and methods for scaling machine learning up to data streams using graphics processing units and cloud computing.
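
A minimal sketch of the multi-target formulation described above: a gradient boosting regressor wrapped in a multi-output model predicts all 24 hourly values of the next day at once. The data are synthetic and sklearn's GradientBoostingRegressor stands in for extreme gradient boosting; this is not the MALENA system.

```python
# Sketch of multi-target day-ahead load forecasting with gradient boosting
# (synthetic features and targets; illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))        # e.g. past load, temperature, calendar features
Y = rng.normal(size=(500, 24))       # 24 hourly load targets for the next day

model = MultiOutputRegressor(GradientBoostingRegressor(n_estimators=50))
model.fit(X, Y)
next_day = model.predict(X[:1])      # day-ahead forecast for one sample
print(next_day.shape)                # (1, 24)
```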

Results: After a detailed study of end-user requirements and the respective specifications, the project produced an integrated software prototype with a user-friendly web interface that can collect data (power generation and consumption data, meteorological data) and provide (a) continuous load forecasts, (b) personalized load management and (c) power generation forecasts for renewable energy sources. The system as a whole, as well as its sub-systems, was thoroughly evaluated with respect to functionality and results. Moreover, novel deep learning methods were introduced and some of them were incorporated into the prototype software. More specifically, novel methods were developed for short-term and day-ahead electric load demand forecasting, as well as day-ahead wind energy generation forecasting. The methods utilized diverse approaches such as tree-based ensembles, lightweight neural networks, anchored input-output learning, online self-distillation and residual error learning. The research outcomes of the project were disseminated in 11 scientific conference papers.

The project has been co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE (project code: Τ2EDK-03048).

Visit MALENA website >

DeepStream

Action “Investment Plans of Innovation” of the Operational Program “Central Macedonia 2014-2020”

The aim of the Semantic Annotation and Metadata Enrichment of Open Video Streams Using Deep Learning project (DeepStream) is to develop an integrated and flexible platform for the automatic semantic annotation of video streams using deep learning methods, able to learn from only a few training examples and to interact continuously with its users, so that it can easily adapt to their needs. The proposed platform will be a useful tool for the analysis of freely available media, with various applications in market analysis, advertising, campaign planning and journalism. The user will be able to create their own machine learning models directly within a Media Asset Management System, harnessing active deep learning techniques. The platform will allow the easy collection, analysis and visualization of large volumes of data, harnessing visual communication techniques for the efficient transmission and comprehension of information.
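
A toy sketch of the active learning idea mentioned above: a classifier trained on a few labelled examples queries the unlabelled frames it is least certain about, so the user only annotates the most informative ones. The synthetic features and the logistic regression stand-in are assumptions, not the DeepStream platform code.

```python
# Minimal sketch of uncertainty-based active learning for few-example annotation
# (illustrative; synthetic data, simple classifier).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(20, 16))
y_labeled = np.arange(20) % 2            # a few annotated frames, two classes
X_pool = rng.normal(size=(200, 16))      # unlabelled video-frame features

clf = LogisticRegression().fit(X_labeled, y_labeled)
probs = clf.predict_proba(X_pool)
# Query the frames the model is least certain about (probability closest to 0.5).
uncertainty = 1.0 - np.abs(probs[:, 1] - 0.5) * 2.0
query_idx = np.argsort(-uncertainty)[:5]
print("Frames to send to the annotator:", query_idx)
```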

Visit DeepStream website >

   

Lightweight Deep Learning Models for Signal and Information Analysis

EU and National (EPAEK-EDBM)

The main research objective of this project is the design, training and implementation of deep learning (DL) models that are efficient in their use of computing resources, memory and energy (lightweight models), using rigorous mathematical principles to transfer knowledge from complex and slow DL models to lightweight ones. Indicatively, the following objectives are defined:

  • Development of a probabilistic foundation of neural representations and use of Probability and Information Theory measures to transfer knowledge from one network to another.
  • Study of deep learning models as continuously varying systems which are no longer treated as “black boxes”, but as systems in which all individual parameters are controlled in a mathematically sound way.
  • Application of the developed methodologies to the training of lightweight DL models for various signal and information analysis applications.
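
The sketch below illustrates one probabilistic way to transfer knowledge between networks, in the spirit of the first objective: pairwise feature similarities in the teacher and the student are turned into probability distributions and matched with a KL divergence. This is an illustrative simplification under assumed sizes, not the project's exact formulation.

```python
# Sketch of knowledge transfer with an information-theoretic loss between
# similarity distributions of teacher and student features (illustrative).
import torch
import torch.nn.functional as F


def similarity_distribution(features: torch.Tensor) -> torch.Tensor:
    # Pairwise cosine similarities mapped to [0, 1] and normalised per row,
    # so each row is a probability distribution over the batch.
    f = F.normalize(features, dim=1)
    sims = (f @ f.t() + 1.0) / 2.0
    return sims / sims.sum(dim=1, keepdim=True)


def transfer_loss(teacher_feats, student_feats, eps=1e-7):
    p = similarity_distribution(teacher_feats)   # target distribution
    q = similarity_distribution(student_feats)   # student distribution
    return (p * torch.log((p + eps) / (q + eps))).sum(dim=1).mean()


teacher = torch.randn(32, 512)                   # features of a large, slow model
student = torch.randn(32, 64, requires_grad=True)
loss = transfer_loss(teacher, student)
loss.backward()                                  # gradients flow into the student only
print(loss.item())
```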


MULTI-FORESEE

COST and EU project (Action CA16101)

The main objective of this Action, entitled MULTI-modal Imaging of FOREnsic SciEnce Evidence (MULTI-FORESEE) – tools for Forensic Science, is to promote innovative, multi-informative, operationally deployable and commercially exploitable imaging solutions/technology to analyse forensic evidence. Forensic evidence includes, but is not limited to, fingermarks, hair, paint, biofluids, digital evidence, fibers, documents and living individuals. Imaging technologies include optical, mass spectrometric, spectroscopic, chemical, physical and digital forensic techniques, complemented by expertise in IT solutions and computational modelling. Imaging technologies enable multiple types of physical and chemical information to be captured in one analysis, from one specimen, with the information more easily conveyed and understood for more rapid exploitation. The enhanced value of the evidence gathered will be conducive to much more informed investigations and judicial decisions, thus contributing both to savings for the public purse and to a speedier and stronger criminal justice system. The Action will use the unique networking and capacity-building capabilities provided by the COST framework to bring together the knowledge and expertise of academia, industry and end users. This synergy is paramount to boost imaging technological developments which are operationally deployable.

Visit MULTI-FORESEE website >

G2NET

COST and EU project (Action CA17137)

A network for Gravitational Waves, Geophysics and Machine Learning – The breakthrough discovery of gravitational waves on September 14, 2015 was made possible through a synergy of techniques drawing on expertise in physics, mathematics, information science and computing. At present, there is a rapidly growing interest in Machine Learning (ML), Deep Learning (DL), classification problems, data mining and visualization and, in general, in the development of new techniques and algorithms for efficiently handling the complex and massive data sets found in what has been coined “Big Data”, across a broad range of disciplines ranging from the social to the natural sciences. The rapid increase in the computing power at our disposal and the development of innovative techniques for the rapid analysis of data will be vital to the exciting new field of Gravitational Wave (GW) Astronomy, on specific topics such as control and feedback systems for next-generation detectors, noise removal, data analysis and data-conditioning tools. The discovery of GW signals from colliding binary black holes (BBH), and the likely existence of a newly observable population of massive, stellar-origin black holes, has made the analysis of low-frequency GW data a crucial mission of GW science. The low-frequency performance of Earth-based GW detectors is largely influenced by the capability to suppress ambient seismic noise. This COST Action aims at creating a broad network of scientists from four different areas of expertise, namely GW physics, Geophysics, Computing Science and Robotics, with the common goal of tackling challenges in data analysis and noise characterization for GW detectors.

Visit G2NET website >

Computational Intelligence

General Secretariat for Research and Technology (GSRT) – Hellenic Foundation for Research and Innovation (HFRI)

Recent advances in Deep Learning (DL) have provided powerful tools for various data analysis tasks. However, DL methods suffer from high complexity, hindering their successful application when limited computational resources are available, while combining them with traditional machine learning methods is not always a straightforward task, further limiting their flexibility. The primary focus of this project is to develop DL methods that can tackle a wide range of different data analysis tasks (classification, clustering, regression, etc.) using any kind of data (image, video, text, time series), while overcoming the aforementioned limitations. To this end, a wide variety of methods are considered, ranging from a neural generalization of the Bag-of-Features model to employing efficient and flexible knowledge transfer methods that can effectively distill the knowledge encoded in a large and complex neural network into a smaller and faster one.
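
As a sketch of the neural Bag-of-Features idea mentioned above, the layer below softly assigns local feature vectors to a set of learnable codewords and averages the assignments into a fixed-length histogram. The softmax-based assignment and the sizes are simplifying assumptions, not the exact published model.

```python
# Sketch of a neural Bag-of-Features layer (illustrative).
import torch
import torch.nn as nn


class NeuralBoF(nn.Module):
    def __init__(self, feature_dim: int, n_codewords: int):
        super().__init__()
        self.codewords = nn.Parameter(torch.randn(n_codewords, feature_dim))

    def forward(self, local_features: torch.Tensor) -> torch.Tensor:
        # local_features: (batch, n_local, feature_dim)
        sims = local_features @ self.codewords.t()    # (batch, n_local, n_codewords)
        assignments = torch.softmax(sims, dim=-1)     # soft quantisation
        return assignments.mean(dim=1)                # one histogram per sample


bof = NeuralBoF(feature_dim=128, n_codewords=32)
feats = torch.randn(4, 100, 128)     # e.g. 100 local CNN features per image
histograms = bof(feats)              # (4, 32) fixed-length representations
print(histograms.shape)
```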

  

Deep Learning Techniques in Digital Media

General Secretariat for Research and Technology (GSRT) – Hellenic Foundation for Research and Innovation (HFRI)

The research objective of the project is to develop deep learning techniques for effectively addressing fundamental digital media analysis problems. In the context of the project, deep representation learning methods were developed for producing efficient representations for content-based image retrieval, including fully unsupervised retraining methods, retraining with class information, and relevance-feedback-based retraining. Furthermore, an online self-distillation method was developed, allowing efficient lightweight models to be trained on generic classification problems.
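
A minimal sketch of the retrieval setting mentioned above: images are represented by L2-normalised deep features and ranked by cosine similarity to the query. Random features stand in for real embeddings; this is not the project's code.

```python
# Sketch of content-based image retrieval with deep representations
# (illustrative; random features replace real CNN embeddings).
import torch
import torch.nn.functional as F

database = F.normalize(torch.randn(1000, 256), dim=1)    # indexed image features
query = F.normalize(torch.randn(1, 256), dim=1)          # query image feature

scores = query @ database.t()                             # cosine similarities
top5 = scores.topk(5).indices.squeeze(0)
print("Top-5 retrieved image ids:", top5.tolist())
```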

  

Deep Learning Techniques

State Scholarship Foundation (IKY)

In recent years, Deep Learning (DL) has managed to surpass the performance of classic Machine Learning algorithms in many complex problems, such as image processing. However, in fields such as time series analysis, DL has yet to show the same performance gains over classic methods. The primary subject of this project is to examine and develop DL methods for the analysis of time series, with a focus on financial time series. The performance of existing DL methods in this modality is heavily dependent on the preprocessing of the available data, which severely impacts their usefulness, so one of the goals of this project is to study and develop better preprocessing techniques. Another important subject is the development of performant DL models and the study of the interactions that arise between agents, in fields such as Deep Reinforcement Learning.
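
As a small example of the preprocessing issue discussed above, the sketch below applies a rolling z-score to a synthetic price series using only past values, one of the simplest ways to normalise financial time series without look-ahead. The window length and the data are assumptions, not the project's methods.

```python
# Sketch of rolling z-score normalisation for a financial time series
# (illustrative; synthetic prices, assumed window length).
import numpy as np

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=500)) + 100.0   # synthetic price series
window = 50

normalised = np.full_like(prices, np.nan)
for t in range(window, len(prices)):
    past = prices[t - window:t]                    # only past values: no look-ahead
    normalised[t] = (prices[t] - past.mean()) / (past.std() + 1e-8)

print(normalised[window:window + 5])
```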