
Doctoral Thesis of Arthur Selle Jacobs


Event Details


DOCTORAL THESIS DEFENSE

Student: Arthur Selle Jacobs
Advisor: Prof. Dr. Lisandro Zambenedetti Granville
Co-advisor: Prof. Dr. Ronaldo Alves Ferreira

Título: Enabling Self-Driving Networks with Machine Learning
Research Line: Architectures, Protocols, and Management of Networks and Services

Date: 10/10/2022
Time: 9 a.m.
Location: This defense will exceptionally be held entirely remotely. Those interested in attending the defense may access the virtual room via the link: https://mconf.ufrgs.br/webconf/00205980

Examining Committee:
Prof. Dr. Burkhard Stiller (University of Zurich)
Prof. Dr. Oscar Mauricio Caicedo Rendon (University of Cauca)
Prof. Dr. Luciano Paschoal Gaspary (UFRGS)

Committee Chair: Prof. Dr. Lisandro Zambenedetti Granville

Abstract: As modern networks grow in size and complexity, they also become increasingly prone to human error. This trend has driven both industry and academia to try to automate management and control tasks, aiming to reduce human interaction with the network and human-made mistakes. Ideally, researchers envision a network design that is not only automatic (i.e., still dependent on human instructions) but autonomous (i.e., capable of making its own decisions). Autonomous networking has been pursued for years, with many different concepts, designs, and implementations, but it has never been fully realized, mainly due to technological limitations. Recent advances in Artificial Intelligence (AI) and Machine Learning (ML) have breathed new life into this idea, which has reemerged under the rebranded name of self-driving networks, in analogy to autonomous cars. In broad terms, a self-driving network is an autonomous network capable of acting according to high-level intents from an operator and automatically adapting to changes in traffic and user behavior. To achieve that vision, a network would need to fulfill four major requirements: (i) understand high-level intents from an operator to dictate its behavior, (ii) monitor itself based on the input intents, (iii) predict and identify patterns from monitored data, and (iv) adapt itself to new behaviors without the intervention of an operator. Since fulfilling these requirements relies heavily on ML models making decisions and classifications that directly impact the network, one particular issue becomes prominent in this design: trust. Applying ML to network management tasks, such as the ones described above, has recently been a popular trend among researchers. However, despite the topic receiving much attention, industry operators have been reluctant to adopt such solutions, mainly because of the black-box nature of ML models, which produce decisions without any explanation of why those decisions were made. Given the high-stakes nature of production networks, it is impossible to trust an ML model that may automatically take system-breaking actions; most important to the scope of this thesis, this lack of trust is a prohibitive challenge that must be addressed if a self-driving network design is ever to be achieved.

The present thesis aims to enable self-driving networks by tackling the inherent lack of trust in the ML models that power them. To that end, we assess and scrutinize the decision-making process of ML-based classifiers used to compose a self-driving network. First, we investigate and evaluate the accuracy and credibility of classifications made by ML models used to process high-level intents from the operator. For that evaluation, we propose a novel conversational interface called LUMI that allows operators to use natural language to describe how the network should behave. Second, we analyze and assess the accuracy and credibility of the decisions ML models make to self-configure the network according to monitored data. In that analysis, we uncover the need to reinvent how researchers apply AI/ML to networking problems, so we propose a new AI/ML pipeline that introduces steps to scrutinize ML models using techniques from the emerging field of eXplainable AI (XAI). Finally, we investigate whether there is a viable method to improve operators' trust in the decisions made by the ML models that enable self-driving networks. Our investigation led us to propose a new XAI method, which we call TRUSTEE, that extracts explanations from any given black-box ML model in the form of decision trees of manageable size. Our results show that ML models widely applied to networking problems have not been put under proper scrutiny and can easily break under real-world traffic. Such models, therefore, need to be corrected to fulfill their given tasks properly.
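As a rough illustration of the surrogate decision-tree idea that the abstract refers to, the Python sketch below fits a small, human-readable decision tree to a black-box model's own predictions so an operator can inspect the rules the model appears to follow. This is a minimal sketch of generic model distillation, assuming scikit-learn and using a stand-in random forest and synthetic data as the "black box"; it is not the actual TRUSTEE algorithm presented in the thesis.

# Minimal sketch: approximate a black-box classifier with a small decision
# tree (generic surrogate/distillation idea, NOT the TRUSTEE algorithm).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in black-box model and data, so the example is runnable on its own.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
blackbox = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Train the surrogate on the black box's *predictions*, not the true labels,
# so the tree mimics the model's decision-making rather than the task itself.
y_pred = blackbox.predict(X)
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y_pred)

# Fidelity: how often the surrogate agrees with the black box on its inputs.
fidelity = (surrogate.predict(X) == y_pred).mean()
print(f"Surrogate fidelity to black box: {fidelity:.2%}")

# Readable rules an operator can scrutinize.
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(10)]))

The max_depth limit keeps the surrogate small enough to read at a glance, at some cost in fidelity; the thesis's method goes further than this sketch in how it builds and sizes the explanation trees.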

Keywords: Self-Driving Networks. Machine Learning. Explainability. Intent-Based Networking. Operator Feedback. Network Security.