CSE 2023 - Keynotes
Keynote Speakers

Prof. Erol Gelenbe
FIEEE FACM MAE
Fellow of the French National Academy of Engineering
Fellow of the Science Academies of Belgium, Poland and Turkey
Honorary Fellow of the Hungarian and Islamic Academies of Sciences
Institute of Theoretical & Applied Informatics, Polish Academy of Sciences, & University Côte d'Azur I3S CNRS, 06100 Nice, France

Title:
Random Neural Networks (RNN) for Accurate Cyberattack Detection and Mitigation at the Edge


ABSTRACT: Even simple cyberattacks can substantially impair the operation and performance of network systems for many hours, sometimes days, and can also increase a system's energy consumption. Their impact on data security, and the effects of the malware they convey and install, are also well known. There is therefore a widespread need for accurate cyberattack detection, and for rapid reaction and mitigation when attacks occur. At the same time, detection must avoid false alarms, so as not to impair the smooth operation of a system that is not under attack. Considerable research has thus been conducted in this important field. Our presentation will briefly introduce the subject and then focus on recent results from the last 4-5 years that are based on the Random Neural Network (RNN). The mathematical model will be described, and its extensions and deep learning algorithms will be discussed in the context of cyberattack detection and mitigation. The presentation will then turn to practical applications illustrated with different cyberattack datasets, showing the high accuracy and low false alarm rates that can be achieved. Measurements of active control schemes for attack mitigation will also be shown. Finally, we will show how the RNN can be used with Reinforcement Learning and Software Defined Networks (SDN) to dynamically control an Edge System that optimises security, QoS and energy consumption.
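
As background for the model mentioned above, here is a minimal sketch of the classical RNN steady-state equations, following Gelenbe's original formulation (the extensions covered in the talk are not reproduced here). The probability $q_i$ that neuron $i$ is excited satisfies the fixed point

\[
q_i = \frac{\lambda_i^+}{r_i + \lambda_i^-}, \qquad
\lambda_i^+ = \Lambda_i + \sum_j q_j\, r_j\, p_{ji}^+, \qquad
\lambda_i^- = \lambda_i + \sum_j q_j\, r_j\, p_{ji}^-,
\]

where $\Lambda_i$ and $\lambda_i$ are the external arrival rates of excitatory and inhibitory spikes at neuron $i$, $r_i$ is its firing rate, and $p_{ji}^+$ (resp. $p_{ji}^-$) is the probability that a spike fired by neuron $j$ reaches neuron $i$ as an excitatory (resp. inhibitory) signal.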

Note. The talk will be based on our publications in the following journals and conferences: Proceedings of the IEEE (2020), Sensors (2021, 2023), ACM SIGCOMM Flexnets (2021), ICC (2016, 2022), IEEE Access (2022, 2023), Performance Evaluation (2022).

BIO: Erol Gelenbe ChEng FIEEE FACM has held named personal chairs at NJIT (USA), Duke University (USA), the University of Central Florida (USA), and Imperial College (UK). He served as Department Head at Duke University, Director of the School of EECS at UCF, and Dennis Gabor Chair and Head of Intelligent Systems and Networks at Imperial College. His research, which focuses on QoS, security and energy, has been funded by industry, the DoD and NSF in the USA, and EPSRC and MoD in the UK, and he has benefited from numerous EU FP5, FP6, FP7 and Horizon 2020 projects since 2003. Currently Professor at the Institute of Theoretical & Applied Informatics, Polish Academy of Sciences, he cooperates on research with the CNRS I3S Laboratory of University Côte d'Azur (Nice, France) and Yasar University (Izmir, Turkey). His current work is supported by grants from Horizon 2020 and UKRI. He is ranked among the top 25 PhD advisors by the American Mathematical Society's Mathematics Genealogy Project, and has won the Grand Prix France Telecom 1996 (French Academy of Sciences), the ACM SIGMETRICS 2008 Life-Time Award, the 2008 Imperial College Rector's Research Award, the 2010 IET Oliver Lodge Medal (IET Innovation Award for Information Technology), and the Mustafa Prize 2017. His high honours include Commander of the Order of the Crown, Belgium (2022), Commander of the Order of Merit, France (2019), Knight of the Legion of Honour, France (2014), Commander of the Order of Merit, Italy (2005), and Grand Officer of the Order of the Star, Italy (2007). He is a Fellow of several national academies and currently chairs the Informatics Section of Academia Europaea.


Prof. Kin K. Leung
EEE and Computing Departments
Imperial College, London
Web: www.commsp.ee.ic.ac.uk/~kkleung/

Title:
Machine Learning for Optimal Resource Allocation in Communication Networks and Computing Infrastructures


ABSTRACT: Optimization techniques are widely used to allocate and share limited resources among competing demands in communication networks and computing infrastructures. The speaker will start by showing how the well-known Transmission Control Protocol (TCP) on the Internet serves as a distributed solution for achieving the optimal allocation of network bandwidth. Unfortunately, factors such as multiple grades of service quality, variable transmission power, and tradeoffs between communication and computation often make the optimization problem for resource allocation non-convex. New distributed solution techniques are needed to solve these problems.
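
For concreteness, the classical network utility maximization (NUM) view of TCP that is usually invoked in this setting (the talk's exact formulation may differ) is

\[
\max_{x \ge 0} \;\sum_{s} U_s(x_s) \quad \text{subject to} \quad \sum_{s:\, l \in s} x_s \le c_l \;\;\text{for every link } l,
\]

where $x_s$ is the sending rate of source $s$, $c_l$ is the capacity of link $l$, and each $U_s$ is a concave utility; with $U_s(x_s) = w_s \log x_s$ the optimum is the weighted proportionally fair allocation, and different TCP variants correspond to different choices of $U_s$. The non-convexity mentioned above arises when the utilities or constraints lose this structure.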

Gradient-based iterative algorithms are commonly used to solve these optimization problems, and much research focuses on improving their convergence. However, whenever the system parameters change, the iterative methods must be rerun to produce a new solution. The speaker will present a new machine-learning method that uses two Coupled Long Short-Term Memory (CLSTM) networks to quickly and robustly produce optimal or near-optimal solutions to constrained optimization problems over a range of system parameters. Numerical examples for the allocation of network resources will be presented to confirm the validity of the proposed method.
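
To make the gradient-based baseline concrete, here is a short Python sketch (not the speaker's code; the topology, weights and step size are illustrative assumptions) of the classic dual-gradient iteration for the NUM problem above with logarithmic utilities:

    # Toy dual-gradient solver for NUM with U_s(x) = w_s * log(x_s).
    import numpy as np

    R = np.array([[1, 1, 0, 0],      # R[l, s] = 1 if source s crosses link l
                  [0, 1, 1, 0],
                  [0, 0, 1, 1]], dtype=float)
    c = np.array([1.0, 1.5, 1.0])    # link capacities
    w = np.ones(4)                   # source weights

    p = np.ones(3)                   # link prices (dual variables)
    alpha = 0.05                     # step size
    for _ in range(5000):
        x = w / (R.T @ p + 1e-12)    # each source's best response to its path price
        p = np.maximum(0.0, p + alpha * (R @ x - c))  # links price their congestion

    print("rates:", x)               # approaches the proportionally fair allocation

The limitation motivating the CLSTM approach is visible here: whenever $c$, $w$ or the routing matrix changes, the whole loop must be rerun, whereas a trained network can map the new parameters directly to a near-optimal allocation.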

BIO: Kin K. Leung received his B.S. degree from the Chinese University of Hong Kong, and his M.S. and Ph.D. degrees from the University of California, Los Angeles. He joined AT&T Bell Labs in New Jersey in 1986 and worked at its successor companies until 2004. Since then, he has been the Tanaka Chair Professor in the Electrical and Electronic Engineering (EEE) and Computing Departments at Imperial College London. He serves as the Head of the Communications and Signal Processing Group in the EEE Department at Imperial. His current research focuses on optimization and machine-learning techniques for the system design and control of large-scale communications, computer and sensor networks. He also works on multi-antenna and cross-layer designs for wireless networks.

He is a Fellow of the Royal Academy of Engineering, an IEEE Fellow, an IET Fellow, and a member of Academia Europaea. He received the Distinguished Member of Technical Staff Award from AT&T Bell Labs (1994) and the Royal Society Wolfson Research Merit Award (2004-09). Jointly with his collaborators, he received the IEEE Communications Society (ComSoc) Leonard G. Abraham Prize (2021), the IEEE ComSoc Best Survey Paper Award (2022), the U.S.–UK Science and Technology Stocktake Award (2021), the Lanchester Prize Honorable Mention Award (1997), and several best conference paper awards. He currently serves as an IEEE ComSoc Distinguished Lecturer (2022-23). He was a member (2009-11) and the chairman (2012-15) of the IEEE Fellow Evaluation Committee for ComSoc. He has served as guest editor and editor for 10 IEEE and ACM journals and chaired the Steering Committee of the IEEE Transactions on Mobile Computing. Currently, he is an editor for ACM Computing Surveys and the International Journal of Sensor Networks.


Dr. Shiqiang Wang
Staff Research Scientist
IBM T.J. Watson Research Center, United States
Homepage: http://shiqiang.wang

Title:
Towards Distributed MLOps: Theory and Practice


ABSTRACT: As machine learning (ML) technologies are applied widely across many domains, it has become essential to develop and deploy ML models rapidly. Towards this goal, MLOps has recently emerged as a set of tools and practices for operationalizing production-ready models in a reliable and efficient manner. However, several open problems remain, including how to automate an ML pipeline that includes data collection, model training, and deployment (inference), with support for distributed data and models stored at multiple edge sites. In this talk, I will cover some theoretical foundations and practical approaches towards enabling distributed MLOps, i.e., MLOps in large-scale edge computing systems. I will start by explaining the requirements and challenges. Then, I will describe how our recent theoretical developments in the areas of coresets, federated learning, and model uncertainty estimation can support distributed MLOps. As a concrete example, I will dive into the details of a federated learning algorithm with flexible control knobs, which adapts the learning process to accommodate the time-varying and unpredictable resource availability often seen in systems in operation, while conforming to a given budget for model training. I will finish the talk by giving an outlook on some future directions.
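
As an illustration of the kind of control knob involved, here is a simplified, self-contained Python sketch (not the algorithm presented in the talk; the model, data, budget and adaptation policy are all assumptions for illustration) of federated averaging in which the number of local steps per round is adapted against a fixed training budget:

    # Illustrative federated averaging on a toy least-squares task; the number
    # of local gradient steps per round (tau) is the control knob, and every
    # local step spends part of a fixed resource budget.
    import numpy as np

    rng = np.random.default_rng(0)
    d, n_clients = 5, 4
    w_true = rng.normal(size=d)
    data = []
    for _ in range(n_clients):
        X = rng.normal(size=(50, d))             # one client's private dataset
        data.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

    def local_update(w, X, y, steps, lr=0.05):
        for _ in range(steps):                   # plain gradient steps at one client
            w = w - lr * X.T @ (X @ w - y) / len(y)
        return w

    w = np.zeros(d)
    budget, used, tau = 400, 0, 10               # affordable local steps; initial knob
    while used + tau * n_clients <= budget:
        updates = [local_update(w.copy(), X, y, tau) for X, y in data]
        w = np.mean(updates, axis=0)             # server averages the client models
        used += tau * n_clients
        tau = max(1, tau - 2)                    # toy policy: more local work early on

    print("distance to optimum:", np.linalg.norm(w - w_true))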

BIO: Shiqiang Wang is a Staff Research Scientist at IBM T. J. Watson Research Center, NY, USA. He received his Ph.D. from Imperial College London, United Kingdom, in 2015. His current research focuses on the intersection of distributed computing, machine learning, networking, and optimization, with a broad range of applications including data analytics, edge-based artificial intelligence (Edge AI), Internet of Things (IoT), and future wireless systems. He received the IEEE Communications Society (ComSoc) Leonard G. Abraham Prize in 2021, IEEE ComSoc Best Young Professional Award in Industry in 2021, IBM Outstanding Technical Achievement Awards (OTAA) in 2019, 2021, 2022, and 2023, and multiple Invention Achievement Awards from IBM since 2016. For more details, please visit his homepage at: https://shiqiang.wang


Nektarios Georgalas
Senior and Principal Researcher
Manager for Innovation, Solutions Architecture and Technical Programme (IoT)
BT, UK

Title:
IoT Autonomics: Building the Autonomous IoT Environment of the Future


ABSTRACT: The Internet of Things is expanding at an unprecedented rate. There are presently an estimated 15+ billion connected IoT devices, a number predicted to double by 2030. The result is IoT ecosystems of ever-increasing complexity, owing to the sheer volume of sensors, the variety of network connectivity, and the many different IoT platforms and systems spanning edge to cloud or originating from a plethora of vendors and hyperscalers. Managing complexity at this scale requires automation, since manual processes are inefficient. We are developing the Autonomous IoT: we apply AI and machine learning techniques to manage this complexity autonomously through self-initiated capabilities, so that IoT ecosystems become self-serviced and self-managed. In this talk we will present the business case and motivation for IoT Autonomics; introduce the IoT Value Added Services (VAS) layer, whose purpose is to deliver this intelligence in IoT ecosystems and transform them into autonomously managed entities; contextualise our work with experiences and use cases from a recent trial at Belfast Harbour; and conclude with a deep dive into a few of these IoT VAS, with live demos of the capabilities.

BIO: Nektarios Georgalas has spent 26 years at BT, as a Senior and Principal Researcher in BT Research and Network Strategy and, more recently, as a Manager for Innovation, Solutions Architecture and Technical Programme (IoT) in BT Digital. He currently leads the IoT programme at the BT Ireland Innovation Centre (BTIIC), where, in collaboration with universities, BT Research and BT Engineering teams, BT partners and customers, he is driving the delivery and realisation of the Autonomous IoT vision: implementing self-serviced and self-managed IoT ecosystems by means of AI, machine learning, advanced analytics, Edge/Fog/Cloud computing, and IoT SLA management and optimisation.

In his career he has pioneered several areas for BT, leading to strategy, tools, and architecture or platform interventions, with value creation always the major driver. He has been the BT director of two co-innovation programmes with BT partners, delivering innovations in the areas of Cloud Services and Security, Data Centres, Network Virtualisation, Smart Cities, IoT and Mobility. He established and led two standards teams in the TeleManagement Forum, where he also led multiple international consortia of major market players and vendors to deliver impactful Catalyst projects, awarded for excellence and best innovation, with influence on the telecoms market and the Forum's strategy towards a model-driven and software-defined ecosystem of digital services in dynamic marketplaces.

Overall, his work has been recognised with 22 awards, including the TM Forum's "Excellence Award for Innovation" 2010, "Most Innovative Catalyst Award" 2014, "Best New Catalyst Award" 2015, "Most Significant Contribution to Frameworx Award" 2015, "Most Innovative Catalyst - Smart X Commercial" 2016, "Outstanding Performance in the Catalyst Programme" 2017 and "Smart City Innovator of the Year" Excellence Award 2017. Other accolades include Global Telecoms Business's "Business Service Innovation Award" in 2010, 2012 and 2013. He was a finalist in the UK IT Industry Awards for "Best IT Innovation" in 2013, Highly Commended for the IET Innovation Award for Telecommunications in 2009, and a finalist for "Best Innovation for Large Enterprise" and "Best Customer Experience Innovation" in the BT Innovation Awards 2010. He was recognised in BT TSO's "Brilliant People" 2015. For his IEEE service he has received two IEEE Outstanding Awards and two IEEE Outstanding Leadership Awards.

Nektarios is inventor or co-inventor of 16 patents. He publishes actively in major high-impact international IEEE journals and conferences, with more than 90 peer-reviewed papers in total. He has served as guest editor of IEEE journals on topics of IoT, Big Data and Data Science, has chaired 6 IEEE conferences, frequently presents as an invited keynote speaker, and has co-edited 6 conference proceedings books.


Prof. Keqiu Li
IEEE Fellow
Tianjin University, China

Title:
Blockchain Technology and System


ABSTRACT: This talk starts with a brief introduction to blockchain and the milestones in its development. Through an analysis of cutting-edge blockchain technologies, it summarizes the critical challenges in the blockchain research area. It then presents Haihe-Smart-Chain, a blockchain system developed by the research group, and the key techniques involved in it. Finally, the talk discusses future directions for blockchain.

BIO: Keqiu Li is a professor and dean of the College of Intelligence and Computing, Tianjin University, China. He is a recipient of the National Science Fund for Distinguished Young Scholars of China. He received his bachelor's and master's degrees from the Department of Applied Mathematics at Dalian University of Technology in 1994 and 1997, respectively, and his Ph.D. degree from the Graduate School of Information Science, Japan Advanced Institute of Science and Technology, in 2005. His research covers blockchain systems, mobile computing, datacenters, and cloud computing. He has published more than 150 papers in prestigious journals and conferences such as TON, TPDS, TC, TMC, MobiCom, INFOCOM, and ICNP.


Prof. Liangxiu Han
Co-Director of Centre for Advanced Computational Science
Deputy Director of MMU Crime & Wellbeing Big Data Centre
Manchester Metropolitan University, UK
Homepage: http://www2.docm.mmu.ac.uk/STAFF/L.Han/

Title:
Scalable Deep Learning from Big Data


ABSTRACT: In recent years, deep learning has attracted much attention for its ability to discover correlation structure in data in an unsupervised fashion, and it has been applied in various domains such as speech recognition, image classification, natural language processing and computer vision. Typical neural networks require large-scale data to learn their parameters (often numbering in the millions), a computationally intensive process that makes training a model very time-consuming. Scalable deep learning, which can train complex models over vast amounts of data while achieving good training performance in terms of both computing time and accuracy, is therefore much needed. This talk will focus on the latest developments and real-world applications of scalable deep learning from big data.
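
As a small illustration of the most common scaling pattern (a minimal Python sketch only; the systems and applications covered in the talk go well beyond this), synchronous data-parallel training splits each mini-batch across workers, which compute gradients independently and then average them before a shared update:

    # Simulated synchronous data-parallel SGD on a toy least-squares model.
    # In a real system each shard's gradient is computed by a separate worker
    # and the averaging is performed with an all-reduce.
    import numpy as np

    rng = np.random.default_rng(1)
    d, n_workers = 10, 4
    w_true = rng.normal(size=d)
    w = np.zeros(d)

    def grad(w, X, y):
        return X.T @ (X @ w - y) / len(y)        # least-squares gradient on a shard

    for step in range(200):
        X = rng.normal(size=(64, d))             # one global mini-batch
        y = X @ w_true + 0.05 * rng.normal(size=64)
        shards = np.array_split(np.arange(64), n_workers)
        grads = [grad(w, X[i], y[i]) for i in shards]   # one gradient per worker
        w -= 0.1 * np.mean(grads, axis=0)        # shared update from averaged gradient

    print("error:", np.linalg.norm(w - w_true))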

BIO: Prof. Liangxiu Han has a PhD in Computer Science from Fudan University, Shanghai, P.R. China (2002). She is currently a Professor of Computer Science in the Department of Computing and Mathematics, Manchester Metropolitan University, a co-Director of the Centre for Advanced Computational Science, and Deputy Director of the ManMet Crime and Well-Being Big Data Centre. Her research mainly lies in the development of novel big data analytics/machine learning/AI methods and of novel intelligent architectures that facilitate big data analytics (e.g., parallel and distributed computing, Cloud/service-oriented computing, data-intensive computing), as well as applications in different domains (e.g., precision agriculture, health, smart cities, cyber security, energy) using various large-scale datasets such as images, sensor data, network traffic, web/texts and geo-spatial data. As a Principal Investigator (PI) or Co-PI, Prof. Han has conducted research on big data/machine learning/AI and cloud, parallel and distributed computing, funded by EPSRC, BBSRC, Innovate UK, Horizon 2020, the British Council, the Royal Society, industry and charities.

Prof. Han has served as an associate editor or guest editor for a number of reputable international journals, and as chair or co-chair in the organisation of a number of international conferences and workshops in the field. She has been invited to give numerous keynotes and talks on different occasions, including at international conferences and at national and international institutions and organisations.

Prof. Han is a member of the EPSRC Peer Review College, an independent expert for European Commission proposal evaluation, and a member of the British Council Peer Review Panel.


Prof. Lu Liu
University of Leicester, UK

Title:
Enabling Artificial Intelligence of Things through Interdisciplinary AI and Data Science Research


ABSTRACT: In the era of the Internet of Things (IoT), an extensive network of interconnected physical devices spanning the globe continuously gathers and shares data. The convergence of IoT with cutting-edge Artificial Intelligence (AI) is giving rise to a transformative wave of innovation. This synergy, known as Artificial Intelligence of Things (AIoT), is set to reshape our future in the realm of smart technologies. Professor Liu will introduce his interdisciplinary research in AI and Data Science, focused on catalysing the emergence of AIoT. His work encompasses foundational investigations within this domain, as well as a diverse range of applications, spanning AI's role in healthcare and Net Zero, and its utilization in commercial data analytics, social media data analytics and sustainable data centre workload analytics.

BIO: Professor Lu Liu is a Professor in the School of Computing and Mathematical Sciences with expertise in AI, Data Science, Sustainable Systems and the Internet of Things, focusing on developing trustworthy and sustainable machine-learning-based systems for health, Net Zero and digital manufacturing. He received his PhD degree from the Surrey Space Centre at the University of Surrey and worked as a Research Fellow at the WRG e-Science Centre at the University of Leeds. Professor Liu has over 250 scientific publications in reputable journals and international conferences, and has secured over 30 grants supported by UKRI/EPSRC, Innovate UK, the Royal Society, the British Council and leading industries (e.g. BT, Rolls-Royce, CGI). He received the Staff Excellence Award in Doctoral Supervision in 2018, has received 7 Best Paper Awards from international conferences, and has been invited to deliver 8 keynote speeches at international conferences. He is currently the University Turing Liaison (Academic) for the Turing University Network (The Alan Turing Institute) at the University of Leicester.
