about me
I am a post-doctoral research assistant at Università degli Studi di Milano in the Applied Intelligent System Lab (AISLAB), currently working on the H2020 European Project Essence and previously on the H2020 European Project MoveCare.

I received my Ph.D. in Artificial Intelligence and Robotics from Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria, at the Artificial Intelligence and Robotics Lab (AIRLab). My advisor was prof. Francesco Amigoni. During my Ph.D., I focused my research on how to extract and use semantic knowledge about the structure of indoor environments to increase the ability of an autonomous mobile robot to interact with and understand a previously unknown environment. To do so, I explored several methods, from Statistical Relational Learning to Graph Learning. Further details can be found in my Ph.D. thesis, which I defended in early 2017.

I received a Master of Science "cum laude" in Computer Engineering in 2012 from Politecnico di Milano, developing a semantic mapping module for a team of autonomous robots exploring an unknown indoor environment.

I was part of the team PoARet, winner of the RoboCup Virtual Robot Competition Rescue Simulation League 2012, held in Mexico City.

I received a Bachelor of Science "cum laude" in Computer Engineering in 2010 from Politecnico di Milano, with a thesis on 3D SLAM for a mobile robot using information obtained from laser scans and visual landmarks.

My research interests involve semantic mapping for autonomous mobile robots in indoor environments, with particular attention to the analysis of the structural properties of buildings, and service robotics, with a focus on the long-term deployment of autonomous service robots in domestic environments.
My goal is to identify the main features that characterize an indoor environment, in order to provide autonomous robots with a human-like comprehension of the structural properties of buildings.

Reasoning about the structure of buildings for autonomous mobile robotics

Autonomous mobile robots can perform many different tasks to help humans during their activities or to replace them in hazardous environments and simple routine operations. When we consider indoor tasks, robots have to interact with environments that are specifically designed for human activities: buildings. Buildings are strongly structured environments that are organized in regular patterns. For instance, rooms typically have a geometrical structure that is characterized by features, such as walls perpendicular to the floor and to the ceiling, and by a layout that can, in most cases, be approximated by a box-like model. To increase their ability to autonomously operate in indoor environments, robots must have a good understanding of buildings, similar to the one that human beings exploit during their everyday activities. If we consider how people and robots interact with indoor environments, it can be said that people naturally understand and “read” buildings as human-made environments (and act in them accordingly), while this is hardly the case for autonomous mobile robots.

Typically, the interaction between a robot and its environment is heavily based on data acquired through perception. Such data are used to construct metric and semantic maps of the environment: the former represent the occupancy and the free space perceived by the robot, while the latter are abstract representations built on top of metric maps that aim to capture the meaning of parts of the perceived environment, providing robots with a human-like understanding of their surroundings. Mapping methods usually provide reliable knowledge only on parts of the environment that have already been visited. This approach often implies that what has not been seen by the robot does not exist, adopting, in a sense, a closed world assumption on the environment.
This form of interaction with the environment is radically different from that of humans, who can easily navigate and comprehend the structure of buildings even without having seen them before. Our main research interest stems from the observation that the global structure of buildings could be exploited to increase the autonomous abilities of robots operating in indoor environments. Our proposed framework aims at identifying and overcoming the limitations of standard mapping methods by starting from two insights on indoor environments. First, we consider an entire floor of a building as a single object, identifying relations between different (and potentially unconnected) parts of the building, such as walls, which can be used to infer the possible structure of unobserved parts of the building, such as unexplored rooms behind closed doors. Second, we consider each building in relation to other buildings with the same function. The function of a building is represented by the main activity that the building is designed for (e.g., an office, a school) and is captured by the concept of building type. The function of a building imposes its structure, its floor plan, and the structure of its rooms. This allows us to exploit the fact that each building, having a precise function, shares some structural features with all other buildings with the same purpose.

Assistive and collaborative autonomous mobile robotics

One of the long-term applications of autonomous mobile robots is to assist in the execution of daily activities, both at home and in the workplace. The tasks that could already be performed by autonomous mobile robots are numerous, such as providing guidance and instructions in large-scale environments like museums or hospitals, providing stimulation and support to elderly people living at home, or functioning as collaborative robots (cobots) in an office environment. However, the long-term deployment of an autonomous robotic platform in a real-world scenario presents several issues dealing both with core abilities of the robot, such as mapping and localization, and with advanced functionalities such as human-robot interaction, task planning, and autonomous decision making. Moreover, the interaction of a robotic platform with IoT-based smart environments could increase the set of possible applications of robotic platforms. Up to now, a proper methodology for testing and assessing the correct long-term and large-scale functioning of such robots is still largely missing.

Our research aims to address several of the current limitations of assistive and collaborative autonomous mobile robots, by analysing and evaluating robot performance in new and different environments, and by developing functionalities for assistive robots. Within the MoveCare project, we investigated how the creation of an Internet of Robotic Things (IoRT), based on the interplay between pervasive smart objects and autonomous robotic systems, can increase the performance and effectiveness of both and provide long-term support to the elderly by creating novel innovative services. Moreover, we showed how long-term operational safety, reliability, and transparent human-robot interaction in long-term assistive robotics can be obtained by using a cloud-based architecture that allows us to monitor the system, perform anomaly detection, and explain the choices made by the robot and the system in such a context.
Thanks to these contributions, we were able to collect data covering more than 1000 days of long-term and unsupervised use of socially assistive robots inside real elders' houses, data that can be used to foster new developments. Several contributions regarding these findings are in preparation or currently under review.


Università degli Studi di Milano
Dipartimento di Informatica
Laboratory of Applied Intelligent Systems (AISLab)
Via Celoria 18
20133 Milano, Italy
Floor 4

mail: matteo.luperto at unimi.it
phone: +39 02 503 16328
git: https://github.com/goldleaf3i
linkedin: https://www.linkedin.com/in/matteo-luperto-80882254/
scholar: https://scholar.google.it/citations?user=CLZhSq8AAAAJ