Publications
Publications by category in reverse chronological order. Generated by jekyll-scholar.
2023
- Enhancing AUV Autonomy With Model Predictive Path Integral Control. Pierre Nicolay, Yvan Petillot, Mykhaylo Marfeychuk, and 2 more authors. In OCEANS 2023, 2023.
Autonomous underwater vehicles (AUVs) play a crucial role in surveying marine environments, carrying out underwater inspection tasks, and ocean exploration. However, in order to ensure that the AUV is able to carry out its mission successfully, a control system capable of adapting to changing environmental conditions is required. Furthermore, to ensure the robotic platform’s safe operation, the onboard controller should be able to operate under certain constraints. In this work, we investigate the feasibility of Model Predictive Path Integral Control (MPPI) for the control of an AUV. We utilise a non-linear model of the AUV to propagate the samples of the MPPI, which allows us to compute the control action in real time. We provide a detailed evaluation of the effect of the main hyperparameters on the performance of the MPPI controller. Furthermore, we compare the performance of the proposed method with classical PID and cascade PID approaches, demonstrating the superiority of our proposed controller. Finally, we present results where environmental constraints are added and show how MPPI can handle them by simply incorporating those constraints in the cost function.
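The MPPI loop the abstract describes — sample noisy control sequences, roll each through a dynamics model, and average them with exponentiated-cost weights — can be sketched as follows. This is a minimal illustrative sketch, not the paper's AUV controller: the toy dynamics, cost function, and hyperparameter values are stand-ins.

```python
import numpy as np

def mppi_step(x0, U, dynamics, cost, n_samples=256, sigma=0.5, lam=1.0, rng=None):
    """One MPPI update: sample perturbed control sequences, roll them out
    through the dynamics model, and re-weight them by exponentiated cost."""
    rng = rng or np.random.default_rng(0)
    H, m = U.shape                                 # horizon, control dimension
    eps = rng.normal(0.0, sigma, size=(n_samples, H, m))
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        x = np.array(x0, dtype=float)
        for t in range(H):
            x = dynamics(x, U[t] + eps[k, t])      # propagate the sample
            costs[k] += cost(x)
    w = np.exp(-(costs - costs.min()) / lam)       # importance weights
    w /= w.sum()
    return U + np.einsum("k,khm->hm", w, eps)      # weighted control update

# Toy 1-D double integrator driven toward the origin.
dyn = lambda x, u: np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * u[0]])
cst = lambda x: x[0] ** 2 + 0.1 * x[1] ** 2
U_new = mppi_step(np.array([1.0, 0.0]), np.zeros((20, 1)), dyn, cst)
```

Environmental constraints of the kind mentioned in the abstract would enter simply as extra penalty terms inside `cost`.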
- Observability-Aware Active Extrinsic Calibration of Multiple Sensors. S. Xu, J.S. Willners, Z. Hong, and 3 more authors. 2023.
The extrinsic parameters play a crucial role in multi-sensor fusion, such as visual-inertial Simultaneous Localization and Mapping (SLAM), as they enable the accurate alignment and integration of measurements from different sensors. However, extrinsic calibration is challenging in scenarios such as underwater, where in-view structures are scarce and visibility is limited, causing incorrect extrinsic calibration due to insufficient motion across all degrees of freedom. In this paper, we propose an entropy-based active extrinsic calibration algorithm that leverages observability analysis and information entropy to enhance the accuracy and reliability of extrinsic calibration. It determines the system observability numerically by using singular value decomposition (SVD) of the Fisher Information Matrix (FIM). Furthermore, when the extrinsic parameter is not fully observable, our method actively searches for the next best motion to recover the system’s observability via entropy-based optimization. Experimental results on synthetic data, in a simulation, and using an actual underwater vehicle verify that the proposed method is able to avoid calibration failure while improving the calibration accuracy and reliability. © 2023 IEEE.
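The numerical observability test the abstract describes — singular value decomposition of the Fisher Information Matrix — can be sketched as below. The stacked Jacobians are toy stand-ins, not the paper's sensor model.

```python
import numpy as np

def observability_margin(J, tol=1e-6):
    """Numerical observability check: form the Fisher Information Matrix
    from a stacked measurement Jacobian J (unit-variance Gaussian noise
    assumed) and inspect its singular values. The smallest singular value,
    relative to the largest, measures how well the least-observable
    parameter direction is constrained by the motion so far."""
    fim = J.T @ J
    s = np.linalg.svd(fim, compute_uv=False)   # sorted, descending
    ratio = s[-1] / s[0]
    return ratio, ratio > tol

# Motion exciting only 2 of 3 parameter directions -> rank-deficient FIM.
J_poor = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
ratio, observable = observability_margin(J_poor)     # not fully observable
J_rich = np.vstack([J_poor, [0.0, 0.0, 1.0]])        # extra motion added
ratio2, observable2 = observability_margin(J_rich)   # now observable
```

In the active-calibration setting, the "next best motion" is the one that most increases this margin (the paper selects it via entropy-based optimization).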
- Adaptive Heading for Perception-Aware Trajectory Following. J.S. Willners, S. Katagiri, S. Xu, and 3 more authors. 2023.
This paper presents an adaptive heading approach for perception awareness during trajectory following. By adapting the heading of a robot to improve feature tracking in the currently mapped environment, localisation accuracy can be improved. This can be a significant advantage for autonomous operations in GPS-denied environments such as subsea or in caves. The aim of the proposed approach is to position the sensor used for perception and feature tracking in such a way that it: obtains a view containing a good observation of the previously mapped environment, faces forward along the direction of travel, reduces the change in heading, and views the perceived environment along the surface’s estimated normals. These four objectives form a weighted utility function that is used to find the most beneficial heading. The benefit is a system that improves feature tracking for simultaneous localisation and mapping (SLAM) while considering the safety of the robot by being aware of its surroundings. To sense the environment, a simulated sensor is discretised into a set of vertical rays based on the vertical field of view. The vertical rays are swept 360 degrees around a position to evaluate candidate headings. This allows the simulated sensor data from ray casting to be reused, reducing the computational load of finding the heading that maximises the utility function. The paper focuses on holonomic robots capable of controlling the robot’s heading or sensor orientation independently of its position. We present results and evaluation in a simulated environment where we show a substantial improvement in the SLAM pose estimation. In addition, we endow an autonomous underwater vehicle (AUV) with the proposed approach during field trials and present results in two different environments. © 2023 IEEE.
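A hedged sketch of the weighted-utility heading selection: candidate headings from a 360-degree sweep are scored by a weighted sum of objective terms, and the best-scoring heading is chosen. The objective functions and weights below are hypothetical placeholders (three simple cosine terms rather than the paper's four objectives).

```python
import math

def best_heading(candidates, weights, objectives):
    """Weighted-utility heading selection: each objective maps a heading
    to a score, and the candidate maximising the weighted sum wins."""
    def utility(psi):
        return sum(w * f(psi) for w, f in zip(weights, objectives))
    return max(candidates, key=utility)

# Hypothetical objectives standing in for the paper's terms; each scores
# a heading in [0, 1] by closeness to a preferred direction.
travel_dir, current_heading = 0.0, math.pi / 2
objectives = [
    lambda p: 0.5 * (1 + math.cos(p - math.pi / 4)),       # map-feature visibility
    lambda p: 0.5 * (1 + math.cos(p - travel_dir)),        # face along travel
    lambda p: 0.5 * (1 + math.cos(p - current_heading)),   # minimise heading change
]
weights = [0.5, 0.3, 0.2]
candidates = [i * math.pi / 18 for i in range(36)]         # 10-degree sweep
psi = best_heading(candidates, weights, objectives)
```

Reusing one set of ray-cast samples across all 36 candidates, as the abstract describes, is what keeps the sweep cheap.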
- Large-Scale Radar Localization using Online Public Maps. Z. Hong, Y. Petillot, K. Zhang, and 2 more authors. 2023.
In this paper, we propose using online public maps, e.g., OpenStreetMap (OSM), for large-scale radar-based localization without needing a prior sensing map. This can potentially extend the localization system anywhere worldwide without building, saving, or maintaining a sensing map, as long as an online public map covers the operating area. Existing methods using OSM use only route network or semantic information. These two sources of information are not combined in previous works, while our proposed system fuses them to improve localization accuracy. Our experiments, on three open datasets collected from three different continents, show that the proposed system outperforms state-of-the-art localization methods, reducing position errors by up to 50%. We release an open-source implementation for the community. © 2023 IEEE.
2022
- Online Mapping and Motion Planning under Uncertainty for Safe Navigation in Unknown Environments. E. Pairet, J.D. Hernández, M. Carreras, and 2 more authors. IEEE Transactions on Automation Science and Engineering, 2022.
Safe autonomous navigation is an essential and challenging problem for robots operating in highly unstructured or completely unknown environments. Under these conditions, robotic systems must not only deal with limited localization information, but their maneuverability is also constrained by their dynamics and often suffers from uncertainty. In order to cope with these constraints, this article proposes an uncertainty-based framework for mapping and planning feasible motions online with probabilistic safety guarantees. The proposed approach deals with the motion, probabilistic safety, and online computation constraints by: 1) incrementally mapping the surroundings to build an uncertainty-aware representation of the environment and 2) iteratively (re)planning trajectories to the goal that are kinodynamically feasible and probabilistically safe through a multilayered sampling-based planner in the belief space. In-depth empirical analyses illustrate some important properties of this approach, namely: 1) the multilayered planning strategy enables rapid exploration of the high-dimensional belief space while preserving asymptotic optimality and completeness guarantees and 2) the proposed routine for probabilistic collision checking results in tighter probability bounds in comparison to other uncertainty-aware planners in the literature. Furthermore, real-world in-water experimental evaluation on a nonholonomic torpedo-shaped autonomous underwater vehicle and simulated trials in an urban environment on an unmanned aerial vehicle demonstrate the efficacy of the method and its suitability for systems with limited onboard computational power. Note to Practitioners: Emergent robotic applications require operating in previously unmapped scenarios. This article presents a unified mapping-planning strategy that enables robots to navigate autonomously and safely in harsh environments. © 2004-2012 IEEE.
- Temporal Planning with Incomplete Knowledge and Perceptual Information. Y. Carreno, Y. Petillot, and R.P.A. Petrick. 2022.
In real-world applications, the ability to reason about incomplete knowledge, sensing, temporal notions, and numeric constraints is vital. While several AI planners are capable of dealing with some of these requirements, they are mostly limited to problems with specific types of constraints. This paper presents a new planning approach that combines contingent plan construction within a temporal planning framework, offering solutions that consider numeric constraints and incomplete knowledge. We propose a small extension to the Planning Domain Definition Language (PDDL) to model (i) incomplete knowledge, (ii) sensing actions that operate over unknown propositions, and (iii) possible outcomes from non-deterministic sensing effects. We also introduce a new set of planning domains to evaluate our solver, which has shown good performance on a variety of problems. © Y. Carreno, Y. Petillot, R. Petrick.
- AUV localisation: a review of passive and active techniques. F. Maurelli, S. Krupiński, X. Xiang, and 1 more author. International Journal of Intelligent Robotics and Applications, 2022.
Localisation, i.e., estimation of one’s position in a given environment, is a crucial element of many mobile systems, manned and unmanned. Due to the high demand for autonomous exploration, patrolling, and inspection services, and the rapid improvement of batteries, sensors, and machine learning algorithms, the quality of localisation becomes even more important for smart robotic systems. The underwater domain is a very challenging environment because water blocks most signals over short distances. Recent results in localisation techniques for underwater vehicles are summarised in two principal categories: passive techniques, which strive to provide the best estimate of the vehicle’s position (global or local) given past and current sensor information, and active techniques, which additionally produce guidance output expected to minimise the uncertainty of the estimated position. © 2021, The Author(s).
- RadarSLAM: A robust simultaneous localization and mapping system for all weather conditions. Z. Hong, Y. Petillot, A. Wallace, and 1 more author. International Journal of Robotics Research, 2022.
A Simultaneous Localization and Mapping (SLAM) system must be robust to support long-term mobile vehicle and robot applications. However, camera and LiDAR based SLAM systems can be fragile when facing challenging illumination or weather conditions which degrade the utility of imagery and point cloud data. Radar, whose operating electromagnetic spectrum is less affected by environmental changes, is promising, although its distinct sensor model and noise characteristics bring open challenges when exploited for SLAM. This paper studies the use of a Frequency Modulated Continuous Wave radar for SLAM in large-scale outdoor environments. We propose a full radar SLAM system, including a novel radar motion estimation algorithm that leverages radar geometry for reliable feature tracking. It also optimally compensates motion distortion and estimates pose by joint optimization. Its loop closure component is designed to be simple yet efficient for radar imagery by capturing and exploiting structural information of the surrounding environment. Extensive experiments on three public radar datasets, ranging from city streets and residential areas to countryside and highways, show competitive accuracy and reliability of the proposed radar SLAM system compared to state-of-the-art LiDAR, vision, and radar methods. The results show that our system is technically viable in achieving reliable SLAM in extreme weather conditions on the RADIATE Dataset, for example, heavy snow and dense fog, demonstrating the promising potential of using radar for all-weather localization and mapping. © The Author(s) 2022.
- Autonomous Underwater Robotic Grasping Research Based on Navigation and Hierarchical Operation. C. Wang, Q. Zhang, X. Wang, and 3 more authors. 2022.
This paper proposes a new framework for the autonomous underwater operation of underwater vehicle manipulator systems (UVMS), which is modular, standardized, and hierarchical. The framework consists of three subsystems: perception, navigation, and grasping. The perception module is based on an underwater stereo vision system, which provides effective environment and target information for the navigation and grasping modules. The navigation module is based on ORB-SLAM and acoustic odometry, which generates the global map and plans a trajectory for the initial stage. The grasping module generates the target grasping pose based on the extracted point cloud and the current robot state, and then executes the grasping task based on the motion planner. The proposed system is tested on several underwater target grasping tasks in a water tank, demonstrating its effectiveness. © 2022 IEEE.
- Multi-Task Reinforcement Learning based Mobile Manipulation Control for Dynamic Object Tracking and Grasping. C. Wang, Q. Zhang, X. Wang, and 3 more authors. 2022.
Agile control of a mobile manipulator is challenging because of the high complexity arising from the coupling of the robotic system and the unstructured working environment. Tracking and grasping a dynamic object with a random trajectory is even harder. In this paper, a multi-task reinforcement learning-based mobile manipulation control framework is proposed to achieve general dynamic object tracking and grasping. Several basic types of dynamic trajectories are chosen as the training set for the task. To improve policy generalization in practice, random noise and dynamics randomization are introduced during the training process. Extensive experiments show that our trained policy can adapt to unseen random dynamic trajectories, with about 0.1 m tracking error and a 75% grasping success rate for dynamic objects. The trained policy can also be successfully deployed on a real mobile manipulator. © 2022 IEEE.
- Planning, Execution, and Adaptation for Multi-Robot Systems using Probabilistic and Temporal Planning. Y. Carreno, J.H.A. Ng, Y. Petillot, and 1 more author. 2022.
Planning for multi-robot coordination during long-horizon missions in complex environments needs to consider resources, temporal constraints, and uncertainty. This can be computationally expensive and impractical for online planning and execution. We propose a decoupled framework to address this. At the high level, we plan for multi-robot missions that require coordination amongst robots, considering temporal and numeric constraints. The temporal plan is decomposed into low-level plans for individual robots. At the low level, we perform online learning and adaptation in response to unexpected probabilistic outcomes to achieve mission goals. Our framework learns over time to improve performance by (1) updating the learned domain model to reduce model prediction errors and (2) constraining the robot’s capabilities, which in turn improves goal allocation. The approach provides a solution to planning problems that require long-term robot operability. We demonstrate the performance of our approach via experiments involving a fleet of heterogeneous robots. © 2022 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved
- Sliding Mode Controller for Positioning of an Underwater Vehicle Subject to Disturbances and Time Delays. H. Tugal, K. Cetin, X. Han, and 4 more authors. 2022.
Unmanned underwater vehicles are crucial for deep-sea exploration and inspection without imposing any danger to human life due to extreme environmental conditions. However, designing a robust controller that can cope with model uncertainties, external disturbances, and time delays for such vehicles is a challenge. This paper implements a sliding mode position control algorithm with a time-delay estimation term on a remotely operated underwater vehicle to deal with disturbances, such as waves, and with time delays. The controller is implemented on an underwater vehicle (BlueROV) and compared with a proportional-integral-derivative (PID) controller in a wave tank under different disturbances and when delays exist within the communication channel. The experimental results show that the proposed control method provides better performance than the conventional PID in the presence of extreme disturbances, with less control effort. © 2022 IEEE.
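As a rough illustration of the control-law family involved, a boundary-layer sliding-mode action for a scalar position error might look like the following. This is a sketch under stated assumptions, not the paper's controller: the time-delay estimation term is omitted and the gains are arbitrary.

```python
import numpy as np

def smc_action(e, e_dot, lam=2.0, K=5.0, phi=0.05):
    """Sliding-mode control action for a scalar position error e:
    drive the sliding surface s = e_dot + lam * e to zero. The
    discontinuous sign(s) is replaced by tanh(s / phi), a boundary
    layer that limits chattering in the actuator command."""
    s = e_dot + lam * e
    return -K * np.tanh(s / phi)
```

A positive position error yields a negative (corrective) thrust command and vice versa; the `K * sign`-like term is what gives sliding-mode control its robustness to bounded disturbances such as waves.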
2021
- From market-ready ROVs to low-cost AUVs. Jonatan Scharff Willners, Ignacio Carlucho, Sean Katagiri, and 8 more authors. In OCEANS 2021: San Diego–Porto, 2021.
- Marine Vehicles Localization Using Grid Cells for Path Integration. Ignacio Carlucho, Manuel F Bailey, Mariano De Paula, and 1 more author. In OCEANS 2021: San Diego–Porto, 2021.
Autonomous Underwater Vehicles (AUVs) are platforms used for research and exploration of marine environments. However, these types of vehicles face many challenges that hinder their widespread use in the industry. One of the main limitations is obtaining accurate position estimation, due to the lack of GPS signal underwater. This estimation is usually done with Kalman filters. However, new developments in the neuroscience field have shed light on the mechanisms by which mammals are able to obtain a reliable estimate of their current position based on external and internal motion cues. A type of neuron called the grid cell has been shown to be part of the path integration system in the brain. In this article, we show how grid cells can be used to obtain a position estimate for underwater vehicles. The grid cell model used requires only the linear velocities together with heading orientation, and provides a reliable estimate of the vehicle’s position. We provide simulation results for an AUV which show the feasibility of our proposed methodology.
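The motion cues the grid-cell model consumes — linear velocity plus heading — amount to path integration, which can be sketched in a few lines. This is a plain dead-reckoning stand-in for illustration, not the neural model itself:

```python
import math

def path_integrate(start, steps, dt=0.1):
    """Path integration: accumulate heading-rotated body-frame velocity
    into a position estimate, the same motion cues a grid-cell model uses."""
    x, y = start
    for v, psi in steps:                 # (speed, heading) at each step
        x += v * math.cos(psi) * dt
        y += v * math.sin(psi) * dt
    return x, y

# Straight run east, then north, at 1 m/s for 10 steps each.
pos = path_integrate((0.0, 0.0), [(1.0, 0.0)] * 10 + [(1.0, math.pi / 2)] * 10)
```

Like any dead reckoning, this estimate drifts with velocity noise; the appeal of the grid-cell representation is how it encodes and corrects such an estimate.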
- Path Planning for Manipulation Using Experience-Driven Random Trees. E. Pairet, C. Chamzas, Y.R. Petillot, and 1 more author. IEEE Robotics and Automation Letters, 2021.
Robotic systems may frequently come across similar manipulation planning problems that result in similar motion plans. Instead of planning each problem from scratch, it is preferable to leverage previously computed motion plans, i.e., experiences, to ease the planning. Different approaches have been proposed to exploit prior information on novel task instances. These methods, however, rely on a vast repertoire of experiences and fail when none relates closely to the current problem. Thus, an open challenge is the ability to generalise prior experiences to task instances that do not necessarily resemble the prior. This work tackles the above challenge with the proposition that experiences are ‘decomposable’ and ‘malleable’, i.e., parts of an experience are suitable to relevantly explore the connectivity of the robot-task space even in non-experienced regions. Two new planners result from this insight: experience-driven random trees (ERT) and its bi-directional version ERTConnect. These planners adopt a tree sampling-based strategy that incrementally extracts and modulates parts of a single path experience to compose a valid motion plan. We demonstrate our method on task instances that significantly differ from the prior experiences, and compare with related state-of-the-art experience-based planners. While their repairing strategies fail to generalise priors of tens of experiences, our planner, with a single experience, significantly outperforms them in both success rate and planning time. Our planners are implemented and freely available in the Open Motion Planning Library. © 2016 IEEE.
- Online 3-dimensional path planning with kinematic constraints in unknown environments using hybrid A* with tree pruning. J.S. Willners, D. Gonzalez-Adell, J.D. Hernández, and 2 more authors. Sensors (Switzerland), 2021.
In this paper, we present an extension to the hybrid A* (HA*) path planner. This extension allows autonomous underwater vehicles (AUVs) to plan paths in 3-dimensional (3D) environments. The proposed approach enables the robot to operate in a safe manner by accounting for the vehicle’s motion constraints, thus avoiding collisions and ensuring that the calculated paths are feasible. Secondly, we propose an improvement for operations in unexplored or partially known environments by endowing the planner with a tree pruning procedure, which maintains a valid and feasible search tree during operation. When the robot senses new obstacles in the environment that invalidate its current path, the planner prunes the branches of the tree that collide with the environment. The path planning algorithm is then initialised with the pruned tree, enabling it to find a solution in less time than replanning from scratch. We present simulation results which show that HA* performs better in known underwater environments than the compared algorithms with regard to planning time, path length, and success rate. For unknown environments, we show that the tree pruning procedure reduces the total planning time in a variety of environments compared to running the full planning algorithm during replanning. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
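The tree-pruning step can be sketched as follows: when newly sensed obstacles invalidate edges, the affected branches and all their descendants are dropped so the surviving tree can warm-start replanning. This is an illustrative sketch over an abstract tree, not the paper's HA* implementation.

```python
def prune_tree(nodes, parent, collides):
    """Prune a search tree after new obstacles are sensed: drop every node
    whose incoming edge now collides, together with all its descendants,
    leaving a valid tree for warm-started replanning."""
    children = {}
    for n, p in parent.items():
        children.setdefault(p, []).append(n)
    removed = set()
    stack = [n for n, p in parent.items() if collides(p, n)]
    while stack:                       # flood removal down the subtree
        n = stack.pop()
        if n in removed:
            continue
        removed.add(n)
        stack.extend(children.get(n, []))
    return [n for n in nodes if n not in removed]

# Toy tree: 0 -> 1 -> 2 and 0 -> 3; edge (0, 1) is newly blocked.
nodes = [0, 1, 2, 3]
parent = {1: 0, 2: 1, 3: 0}
kept = prune_tree(nodes, parent, lambda p, n: (p, n) == (0, 1))
# node 1 and its descendant 2 are pruned; 0 and 3 survive
```

Initialising the planner with `kept` rather than an empty tree is what yields the reported reduction in replanning time.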
- Learning-Based Underwater Autonomous Grasping via 3D Point Cloud. C. Wang, Q. Zhang, S. Li, and 4 more authors. 2021.
Underwater autonomous grasping is a challenging task in robotic research. In this paper, we propose a learning-based underwater grasping method using a 3D point cloud generated from an underwater stereo camera. First, we use the Pinax model for accurate refraction correction of a stereo camera in a flat-pane housing. Second, a dense point cloud of the target is generated using the calibrated stereo images. An improved Grasp Pose Detection (GPD) method is then developed to generate candidate grasping poses and select the best one based on kinematic constraints. Finally, an optimal trajectory is planned to finish the grasping task. Experiments in a water tank demonstrate the effectiveness of our method. © 2021 MTS.
- Underwater Visual Acoustic SLAM with Extrinsic Calibration. S. Xu, T. Luczynski, J.S. Willners, and 4 more authors. 2021.
Underwater scenarios are challenging for visual Simultaneous Localization and Mapping (SLAM) due to limited visibility and the intermittent loss of structure in image views. In this paper, we propose a visual acoustic bundle adjustment system which fuses a camera and a Doppler Velocity Log (DVL) in a graph SLAM framework for reliable underwater localization and mapping. In order to fuse the vision and acoustic measurements, a calibration algorithm is also designed to estimate the extrinsic parameters between a camera and a DVL using features detected in scenes. Experimental results in a tank and an offshore wind farm show that the proposed method can achieve better robustness and localization accuracy than pure visual SLAM, especially in visually challenging scenarios, and that the extrinsic calibration parameters can be accurately estimated, even when initialized with a random guess. © 2021 IEEE.
- Robust Underwater Visual SLAM Fusing Acoustic Sensing. E. Vargas, R. Scona, J.S. Willners, and 4 more authors. 2021.
In this paper, we propose an approach for robust visual Simultaneous Localisation and Mapping (SLAM) in underwater environments leveraging acoustic, inertial and altimeter/depth sensors. Underwater visual SLAM is challenging due to factors including poor visibility caused by suspended particles in water, a lack of light and insufficient texture in the scene. Because of this, many state-of-the-art approaches rely on acoustic sensing instead of vision for underwater navigation. Building on the sparse visual SLAM system ORB-SLAM2, this paper proposes to improve the robustness of camera pose estimation in underwater environments by leveraging acoustic odometry, which derives a drifting estimate of the 6-DoF robot pose from fusion of a Doppler Velocity Log (DVL), a gyroscope and an altimeter or depth sensor. Acoustic odometry estimates are used as motion priors, and we formulate pose residuals that are integrated within the camera pose tracking, local and global bundle adjustment procedures of ORB-SLAM2. The original design of ORB-SLAM2 supports a single map and enters relocalisation when tracking is lost. This is a significant problem for scenarios where a robot performs a continuous scanning motion without returning to a previously visited location. One of our main contributions is to enable the system to create a new map whenever it encounters a new scene where visual odometry can work. This new map is connected with its predecessor in a common graph using estimates from the proposed acoustic odometry. Experimental results on two underwater vehicles demonstrate the increased robustness of our approach compared to baseline ORB-SLAM2 in controlled, uncontrolled, and field environments. © 2021 IEEE.
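The idea of using odometry estimates as motion priors can be illustrated with a residual that penalises disagreement between the relative camera motion and the odometry estimate. The sketch below uses 2-D rigid transforms for brevity (the system itself works with 6-DoF poses) and the function names are hypothetical:

```python
import numpy as np

def se2(x, y, th):
    """2-D rigid transform as a 3x3 homogeneous matrix."""
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def prior_residual(T_i, T_j, T_odom):
    """Residual coupling two camera poses to an odometry motion prior:
    the relative camera motion inv(T_i) @ T_j should match T_odom, so
    the residual is the (dx, dy, dtheta) of the leftover transform."""
    T_err = np.linalg.inv(T_odom) @ (np.linalg.inv(T_i) @ T_j)
    theta = np.arctan2(T_err[1, 0], T_err[0, 0])
    return np.array([T_err[0, 2], T_err[1, 2], theta])

# When the camera motion agrees with the odometry, the residual vanishes.
r = prior_residual(se2(0, 0, 0), se2(1.0, 0.5, 0.2), se2(1.0, 0.5, 0.2))
```

In a bundle adjustment, such residuals would be weighted by the odometry covariance and summed alongside the reprojection terms.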
- Robust underwater SLAM using autonomous relocalisation. J.S. Willners, Y. Carreno, S. Xu, and 6 more authors. 2021.
This paper presents a robust underwater simultaneous localisation and mapping (SLAM) framework using autonomous relocalisation. The proposed approach strives to maintain a single consistent map during operation and updates its current plan when the SLAM loses feature tracking. The updated plan traverses viewpoints that are likely to aid in merging the current map into the global map. We present the sub-systems of the framework: the SLAM, viewpoint generation, and high-level planning. In-water experiments show the advantage of our approach used on an autonomous underwater vehicle (AUV) performing inspections. © 2021 The Authors.
2020
- Self-assessment of grasp affordance transfer. P. Ardon, E. Pairet, Y. Petillot, and 3 more authors. 2020.
Reasoning about object grasp affordances allows an autonomous agent to estimate the most suitable grasp to execute a task. While current approaches for estimating grasp affordances are effective, their prediction is driven by hypotheses on visual features rather than an indicator of a proposal’s suitability for an affordance task. Consequently, these works cannot guarantee any level of performance when executing a task and, in fact, cannot even ensure successful task completion. In this work, we present a pipeline for self-assessment of grasp affordance transfer (SAGAT) based on prior experiences. We visually detect a grasp affordance region to extract multiple grasp affordance configuration candidates. Using these candidates, we forward simulate the outcome of executing the affordance task to analyse the relation between task outcome and grasp candidates. The relations are ranked by performance success with a heuristic confidence function and used to build a library of affordance task experiences. The library is later queried to perform one-shot transfer estimation of the best grasp configuration on new objects. Experimental evaluation shows that our method exhibits a significant performance improvement of up to 11.7% over current state-of-the-art methods on grasp affordance detection. Experiments on a PR2 robotic platform demonstrate that our method can be reliably deployed to deal with real-world task affordance problems. © 2020 IEEE.
- RadarSLAM: Radar based large-scale SLAM in all weathers. Z. Hong, Y. Petillot, and S. Wang. 2020.
Numerous Simultaneous Localization and Mapping (SLAM) algorithms have been presented in the last decade using different sensor modalities. However, robust SLAM in extreme weather conditions is still an open research problem. In this paper, RadarSLAM, a full radar-based graph SLAM system, is proposed for reliable localization and mapping in large-scale environments. It is composed of pose tracking, local mapping, loop closure detection and pose graph optimization, enhanced by novel feature matching and probabilistic point cloud generation on radar images. Extensive experiments are conducted on a public radar dataset and several self-collected radar sequences, demonstrating state-of-the-art reliability and localization accuracy in various adverse weather conditions, such as dark night, dense fog and heavy snowfall. © 2020 IEEE.
- A Comparison of Few-Shot Learning Methods for Underwater Optical and Sonar Image Classification. M. Ochal, J. Vazquez, Y. Petillot, and 1 more author. 2020.
Deep convolutional neural networks generally perform well in underwater object recognition tasks on both optical and sonar images. Many such methods require hundreds, if not thousands, of images per class to generalize well to unseen examples. However, obtaining and labeling sufficiently large volumes of data can be relatively costly and time-consuming, especially when observing rare objects or performing real-time operations. Few-Shot Learning (FSL) efforts have produced many promising methods to deal with low data availability. However, little attention has been given to the underwater domain, where the style of images poses additional challenges for object recognition algorithms. To the best of our knowledge, this is the first paper to evaluate and compare several supervised and semi-supervised FSL methods using underwater optical and side-scan sonar imagery. Our results show that FSL methods offer a significant advantage over traditional transfer learning methods that fine-tune pre-trained models. We hope that our work will help apply FSL to autonomous underwater systems and expand their learning capabilities. © 2020 IEEE.
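One of the simplest supervised FSL baselines compared in such studies is nearest-centroid classification over embeddings (the core of prototypical networks). A sketch, with toy 2-D "embeddings" standing in for features from a learned encoder:

```python
import numpy as np

def nearest_centroid_fsl(support, labels, queries):
    """Nearest-centroid few-shot classification: average the embedded
    support examples of each class into a prototype, then label each
    query by its closest prototype (Euclidean distance)."""
    classes = sorted(set(labels))
    protos = np.stack([
        np.mean([s for s, l in zip(support, labels) if l == c], axis=0)
        for c in classes
    ])
    # Pairwise query-prototype distances, shape (n_queries, n_classes).
    d = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=-1)
    return [classes[i] for i in np.argmin(d, axis=1)]

# 2-way 2-shot toy problem; class names are hypothetical.
support = np.array([[0.0, 0.0], [0.2, 0.0], [1.0, 1.0], [1.2, 1.0]])
labels = ["rock", "rock", "mine", "mine"]
queries = np.array([[0.1, 0.1], [1.1, 0.9]])
pred = nearest_centroid_fsl(support, labels, queries)
```

The appeal of this family of methods is that only the prototypes change at deployment time; no fine-tuning of the encoder is needed for new classes.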
- A decentralised strategy for heterogeneous AUV missions via goal distribution and temporal planning. Y. Carreno, È. Pairet, Y. Petillot, and 1 more author. 2020.
Heterogeneous multi-robot systems offer the potential to support complex missions, such as those needed for persistent autonomy in underwater domains. Such systems enable each robot to be optimised for specific tasks to better manage dynamic situations. In this context, temporal planning can generate plans to support the execution of multi-robot missions. However, the task distribution quality in the generated plans is often poor due to the strategies that existing planners employ to search for suitable actions, which do not tend to optimise task allocation. In this paper, we propose a new algorithm called the Decentralised Heterogeneous Robot Task Allocator (DHRTA) which enhances goal distribution by considering task spatial distribution, execution time, and the capabilities of the available robots. DHRTA is the first phase of our decentralised planning strategy which supports individual robot plan generation using temporal planners. Experiments illustrate the robustness of the approach and indicate improvements in plan quality by reducing the planning time, mission time and the rate of mission failures. Copyright © 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
- Learning mobile manipulation through deep reinforcement learning. C. Wang, Q. Zhang, Q. Tian, and 5 more authors. Sensors (Switzerland), 2020.
Mobile manipulation has a broad range of applications in robotics. However, it is usually more challenging than fixed-base manipulation due to the complex coordination of a mobile base and a manipulator. Although recent works have demonstrated that deep reinforcement learning is a powerful technique for fixed-base manipulation tasks, most of them are not applicable to mobile manipulation. This paper investigates how to leverage deep reinforcement learning to tackle whole-body mobile manipulation tasks in unstructured environments using only on-board sensors. A novel mobile manipulation system which integrates state-of-the-art deep reinforcement learning algorithms with visual perception is proposed. It has an efficient framework decoupling visual perception from the deep reinforcement learning control, which enables its generalization from simulation training to real-world testing. Extensive simulation and experiment results show that the proposed system is able to autonomously grasp different types of objects in various simulation and real-world scenarios, verifying its effectiveness. © 2020 by the authors. Licensee MDPI, Basel, Switzerland.
- Towards Robust Mission Execution via Temporal and Contingent Planning. Y. Carreno, Y. Petillot, and R.P.A. Petrick. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2020.
In this work, we present a general approach to task planning based on the combination of temporal planning, contingent planning and run-time sensing. The strategy provides a solution for generating plans that can adapt during mission execution by reasoning about the data acquired by the sensory system. The approach detects actions that can change the initial plan and evaluates possible outcomes. We demonstrate the effectiveness of our approach in two different experiments in a maritime environment where the robots have to inspect the state of a valve and execute actions based on online sensor information. © 2020, Springer Nature Switzerland AG.
- Task allocation strategy for heterogeneous robot teams in offshore missions. Y. Carreno, È. Pairet, Y. Petillot, and 1 more author. 2020.
Heterogeneous robot fleets are capable of supporting dynamic and resource-constrained missions. While current temporal AI planners are able to deal with multi-robot planning problems by producing plans that take into account the individual robot capabilities and task requirements, these approaches deal with the high dimensionality of the state space inefficiently, leading to multi-robot plans with poor plan quality. This paper proposes a novel task allocation strategy called Multi-Role Goal Assignment (MRGA) which enables more efficient computation of plans using temporal planners. The approach allocates a mission’s goals based on robot capabilities, the redundancy of the sensor system, the spatial distribution of the goals and task implementation time, avoiding the need to compute a large number of possible assignments. We demonstrate the applicability of the strategy with multiple robots operating jointly in an offshore platform. Experiments demonstrate that our approach allows for more robust solutions and improved plan quality while significantly reducing planning time. © 2020 International Foundation for Autonomous.
2019
- TextPlace: Visual place recognition and topological localization through reading scene texts. Z. Hong, Y. Petillot, D. Lane, and 2 more authors. 2019.
Visual place recognition is a fundamental problem for many vision based applications. Sparse-feature and deep-learning based methods have been successful and dominant over the past decade. However, most of them do not explicitly leverage high-level semantic information to deal with challenging scenarios where they may fail. This paper proposes a novel visual place recognition algorithm, termed TextPlace, based on scene texts in the wild. Since scene texts are high-level information invariant to illumination changes and very distinct for different places when considering spatial correlation, they are beneficial for visual place recognition tasks under extreme appearance changes and perceptual aliasing. TextPlace also takes spatial-temporal dependence between scene texts into account for topological localization. Extensive experiments show that TextPlace achieves state-of-the-art performance, verifying the effectiveness of using high-level scene texts for robust visual place recognition in urban areas. © 2019 IEEE.
2018
- Adaptive low-level control of autonomous underwater vehicles using deep reinforcement learning. Ignacio Carlucho, Mariano De Paula, Sen Wang, and 2 more authors. Robotics and Autonomous Systems, 2018.
- AUV Position Tracking Control Using End-to-End Deep Reinforcement Learning. Ignacio Carlucho, Mariano De Paula, Sen Wang, and 3 more authors. In OCEANS 2018 MTS/IEEE Charleston, 2018.