Drones have the potential to revolutionize power
line inspection by increasing productivity, reducing inspection
time, improving data quality, and eliminating the risks for
human operators. Current state-of-the-art systems for power
line inspection have two shortcomings: (i) control is decoupled
from perception and needs accurate information about the
location of the power lines and masts; (ii) obstacle avoidance is
decoupled from the power line tracking, which results in poor
tracking in the vicinity of the power masts, and, consequently,
in decreased data quality for visual inspection. In this work,
we propose a model predictive controller (MPC) that overcomes
these limitations by tightly coupling perception and action. Our
controller generates commands that maximize the visibility of
the power lines while, at the same time, safely avoiding the
power masts. For power line detection, we propose a lightweight
learning-based detector that is trained only on synthetic data
and is able to transfer zero-shot to real-world power line
images. We validate our system in simulation and real-world
experiments on a mock-up power line infrastructure. We release
our code and datasets to the public.
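As a rough illustration of how such perception-aware control couples tracking
and avoidance, an MPC stage cost can combine a line-tracking term, a
visibility term, and a soft obstacle barrier. The Python sketch below is
illustrative only; the state layout, weights, and function names are
assumptions, not the paper's implementation.

```python
import numpy as np

def stage_cost(x, u, p_line, p_mast, w_track=1.0, w_percep=0.5,
               w_obs=10.0, safe_dist=2.0):
    """Illustrative MPC stage cost coupling perception and action.

    x      : drone state [position (3), unit camera axis (3)] (assumed layout)
    u      : control input (penalized quadratically)
    p_line : closest point on the tracked power line
    p_mast : position of the nearest power mast
    """
    pos, cam_axis = x[:3], x[3:6]
    # Tracking term: stay close to the line.
    e_track = np.linalg.norm(pos - p_line)
    # Perception term: keep the line centered in the camera's field of view.
    bearing = (p_line - pos) / (np.linalg.norm(p_line - pos) + 1e-9)
    e_percep = 1.0 - cam_axis @ bearing   # 0 when the camera points at the line
    # Obstacle term: soft barrier that grows as the mast gets closer.
    d_mast = np.linalg.norm(pos - p_mast)
    e_obs = max(0.0, safe_dist - d_mast) ** 2
    return (w_track * e_track**2 + w_percep * e_percep
            + w_obs * e_obs + 1e-2 * u @ u)
```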
We propose a system solution to achieve data-efficient,
decentralized state estimation for a team of flying robots using thermal
images and inertial measurements. Each robot can fly independently, and
exchange data when possible to refine its state estimate. Our system
front-end applies an online photometric calibration to refine the thermal
images so as to enhance feature tracking and place recognition. Our system
back-end uses a covariance intersection fusion strategy to neglect the
cross-correlation between agents so as to lower memory usage and
computational cost. The communication pipeline uses Vector of Locally
Aggregated Descriptors (VLAD) to construct a request-response policy that
requires low bandwidth usage. We test our collaborative method on both
synthetic and real-world data. Our results show that the proposed method
improves trajectory estimation by up to 46% with respect to an
individual-agent approach, while reducing the communication exchange by up
to 89%. Datasets and code are released to the public, extending the
already-public JPL xVIO library.
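The covariance intersection step of the back-end can be sketched in its
textbook form: fuse two estimates with unknown cross-correlation by
optimizing a convex combination of their information matrices. This is a
generic sketch, not the xVIO code.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(x1, P1, x2, P2):
    """Fuse two estimates (x1, P1) and (x2, P2) whose cross-correlation is
    unknown. The weight omega minimizes the trace of the fused covariance,
    keeping the result consistent without storing inter-agent correlations."""
    def fused_trace(w):
        info = w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)
        return np.trace(np.linalg.inv(info))
    w = minimize_scalar(fused_trace, bounds=(1e-6, 1 - 1e-6),
                        method="bounded").x
    info = w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)
    P = np.linalg.inv(info)
    x = P @ (w * np.linalg.inv(P1) @ x1 + (1 - w) * np.linalg.inv(P2) @ x2)
    return x, P
```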
Event-aided Direct Sparse Odometry (Oral Presentation)
Hidalgo-Carrió, Javier,
Gallego, Guillermo,
and Scaramuzza, Davide
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
2022
We introduce EDS, a direct monocular visual odometry method using events and frames.
Our algorithm leverages the event generation model to track the camera motion in
the blind time between frames. The method formulates a direct probabilistic
approach of observed brightness increments. Per-pixel brightness increments are
predicted using a sparse number of selected 3D points and are compared to the
events via the brightness increment error to estimate camera motion. The method
recovers a semi-dense 3D map using photometric bundle adjustment. EDS is the
first method to perform 6-DOF VO using events and frames with a direct approach.
By design it overcomes the problem of changing appearance in indirect methods.
We also show that, for a target error performance, EDS can work at lower frame
rates than state-of-the-art frame-based VO solutions. This opens the door to
low-power motion-tracking applications where frames are sparingly triggered "on
demand" and our method tracks the motion in between. We release code and
datasets to the public.
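To give a flavor of the direct brightness-increment objective, the sketch
below compares event-accumulated increments against increments predicted by
the linearized event generation model. The accumulation scheme, fixed
contrast threshold, and variable names are simplifying assumptions, not
EDS's actual implementation.

```python
import numpy as np

def brightness_increment_error(events, grad_L, flow, dt, C=0.2):
    """Sketch of a direct brightness-increment residual.

    events : (N, 4) array of (x, y, t, polarity) in the blind time
    grad_L : (H, W, 2) spatial gradient of the last intensity frame
    flow   : (H, W, 2) pixel motion induced by the candidate camera motion
    """
    H, W = grad_L.shape[:2]
    # Accumulate measured brightness increments: each event contributes
    # +/- C (the contrast threshold) at its pixel.
    dL_meas = np.zeros((H, W))
    xs = events[:, 0].astype(int)
    ys = events[:, 1].astype(int)
    np.add.at(dL_meas, (ys, xs), C * events[:, 3])
    # Predicted increment from the linearized event generation model:
    # dL ~= -grad(L) . (flow * dt)
    dL_pred = -(grad_L[..., 0] * flow[..., 0]
                + grad_L[..., 1] * flow[..., 1]) * dt
    # Photometric residual to be minimized over the camera motion.
    return np.sum((dL_meas - dL_pred) ** 2)
```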
2021
Concept, Development and Testing of Mars Rover Prototypes for ESA Planetary Exploration
Azkarate, Martín,
Gerbes, Levin,
Wiese, Tim,
Zwick, Martin,
Pagnamenta, Marco,
Hidalgo-Carrió, Javier,
Poulakis, Pantelis,
and Pérez-del-Pulgar, Carlos J.
This paper presents the system architecture and design of two planetary
rover laboratory prototypes developed at the European Space Agency (ESA).
These research platforms have been developed to provide early prototypes for
validation of designs and serve ESA’s Automation & Robotics Lab
infrastructure as testbeds for continuous research and testing. Both rovers
have been built considering the constraints of space systems while retaining
a sufficient level of representativeness to allow rapid prototyping. They
avoid strictly space-qualified components and designs, which present a major
cost burden and frequently lack the flexibility or modularity that the lab
environment requires for its investigations. This design approach is
followed for all the mechanical, electrical, and software aspects of the
system. In this paper, two ExoMars mission-representative rovers, the
ExoMars Testing Rover (ExoTeR) and the Martian Rover Testbed for Autonomy
(MaRTA), are thoroughly described. The lessons learnt and experience gained
while running several research activities and test campaigns are also
presented. Finally, the paper aims to provide some insight on how to reduce
the gap between lab R&D and flight implementation by anticipating system
constraints when building and testing these platforms.
Powerline Tracking with Event Cameras
Dietsche, Alexander,
Cioffi, Giovanni,
Hidalgo-Carrió, Javier,
and Scaramuzza, Davide
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
2021
Autonomous inspection of powerlines with quadrotors is challenging.
Flights require persistent perception to keep a close look at the lines. We
propose a method that uses event cameras to robustly track powerlines. Event
cameras are inherently robust to motion blur, have low latency, and high dynamic
range. Such properties are advantageous for autonomous inspection of powerlines
with drones, where fast motions and challenging illumination conditions are
ordinary. Our method identifies lines in the stream of events by detecting
planes in the spatio-temporal signal, and tracks them through time. The
implementation runs onboard and is capable of detecting multiple distinct lines
in real time with rates of up to 320 thousand events per second. The
performance is evaluated in real-world flights along a powerline. The tracker
is able to persistently track the powerlines, with a mean line lifetime 10x
longer than that of existing approaches.
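A minimal sketch of the underlying geometric idea: a line moving with
roughly constant image velocity traces a plane in the (x, y, t) event
volume, so it can be detected by plane fitting. The SVD-based fit and inlier
test below are illustrative, not the onboard implementation.

```python
import numpy as np

def fit_event_plane(events):
    """Fit a plane to events in (x, y, t) space.

    `events` is an (N, 3) array of (x, y, t). The plane normal encodes the
    line direction and its apparent image motion.
    """
    centroid = events.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # normal of the best-fit plane through the centered points.
    _, _, vt = np.linalg.svd(events - centroid)
    normal = vt[-1]
    return centroid, normal

def plane_inliers(events, centroid, normal, tol=1.5):
    """Distance-to-plane test to associate new events with a tracked line."""
    dist = np.abs((events - centroid) @ normal)
    return dist < tol
```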
Combining Events and Frames using Recurrent Asynchronous Multimodal Networks for Monocular Depth Prediction
Event cameras are novel vision sensors that report per-pixel brightness
changes as a stream of asynchronous “events”. They offer significant
advantages compared to standard cameras due to their high temporal
resolution, high dynamic range and lack of motion blur. However, events only
measure the varying component of the visual signal, which limits their
ability to encode scene context. By contrast, standard cameras measure
absolute intensity frames, which capture a much richer representation of the
scene. Both sensors are thus complementary. However, due to the asynchronous
nature of events, combining them with synchronous images remains
challenging, especially for learning-based methods. This is because
traditional recurrent neural networks (RNNs) are not designed for
asynchronous and irregular data from additional sensors. To address this
challenge, we introduce Recurrent Asynchronous Multimodal (RAM) networks,
which generalize traditional RNNs to handle asynchronous and irregular data
from multiple sensors. Inspired by traditional RNNs, RAM networks maintain a
hidden state that is updated asynchronously and can be queried at any time
to generate a prediction. We apply this novel architecture to monocular
depth estimation with events and frames, where we show an improvement over
state-of-the-art methods by up to 30% in terms of mean absolute depth error.
To enable further research on multimodal learning with events, we release
EventScape, a new dataset with events, intensity frames, semantic labels,
and depth maps recorded in the CARLA simulator.
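A toy sketch of the RAM idea in PyTorch: per-sensor encoders feed a shared
recurrent state that is updated whenever a measurement arrives, in arrival
order, and decoded on demand. The module layout and dimensions are
assumptions for illustration, not the released architecture.

```python
import torch
import torch.nn as nn

class AsyncRecurrentCell(nn.Module):
    """Toy RAM-style cell: a shared hidden state updated asynchronously by
    multiple sensors and queryable at any time for a prediction."""
    def __init__(self, feat_dims, hidden_dim):
        super().__init__()
        # One encoder per sensor, e.g. {"events": 64, "frames": 64}.
        self.encoders = nn.ModuleDict(
            {name: nn.Linear(d, hidden_dim) for name, d in feat_dims.items()})
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)  # shared update
        self.decoder = nn.Linear(hidden_dim, 1)        # e.g. a depth value

    def update(self, h, sensor, x):
        # Asynchronous update: called whenever sensor `sensor` measures x.
        return self.gru(self.encoders[sensor](x), h)

    def query(self, h):
        # A prediction can be generated at any time from the latest state.
        return self.decoder(h)

# Usage sketch: interleave updates from both sensors, query anywhere.
cell = AsyncRecurrentCell({"events": 64, "frames": 64}, hidden_dim=128)
h = torch.zeros(1, 128)
h = cell.update(h, "events", torch.randn(1, 64))
h = cell.update(h, "frames", torch.randn(1, 64))
depth = cell.query(h)
```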
2020
Learning Monocular Dense Depth from Events
Hidalgo-Carrió, Javier,
Gehrig, Daniel,
and Scaramuzza, Davide
IEEE International Conference on 3D Vision (3DV)
2020
Event cameras are novel sensors that output brightness changes in the form
of a stream of asynchronous “events” instead of intensity frames. Compared
to conventional image sensors, they offer significant advantages: high
temporal resolution, high dynamic range, no motion blur, and much lower
bandwidth. Recently, learning-based approaches have been applied to
event-based data, thus unlocking their potential and making significant
progress in a variety of tasks, such as monocular depth prediction. Most
existing approaches use standard feed-forward architectures to generate
network predictions, which do not leverage the temporal consistency present
in the event stream. We propose a recurrent architecture to solve this task
and show significant improvement over standard feed-forward methods. In
particular, our method generates dense depth predictions using a monocular
setup, which has not been shown previously. We pretrain our model using a
new dataset containing events and depth maps recorded in the CARLA
simulator. We test our method on the Multi Vehicle Stereo Event Camera
Dataset (MVSEC). Quantitative experiments show up to 50% improvement in
average depth error with respect to previous event-based methods.
Video to Events: Recycling Video Datasets for Event Cameras
Gehrig, Daniel,
Gehrig, Mathias,
Hidalgo-Carrió, Javier,
and Scaramuzza, Davide
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
2020
Event cameras are novel sensors that output brightness changes in the form
of a stream of asynchronous “events” instead of intensity frames. They offer
significant advantages with respect to conventional cameras: high dynamic
range (HDR), high temporal resolution, and no motion blur. Recently, novel
learning approaches operating on event data have achieved impressive
results. Yet, these methods require a large amount of event data for
training, which is hardly available due to the novelty of event sensors in
computer vision research. In this paper, we present a method that addresses
these needs by converting any existing video dataset recorded with
conventional cameras to synthetic event data. This unlocks the use of a
virtually unlimited number of existing video datasets for training networks
designed for real event data. We evaluate our method on two relevant vision
tasks, i.e., object recognition and semantic segmentation, and show that
models trained on synthetic events have several benefits: (i) they
generalize well to real event data, even in scenarios where standard-camera
images are blurry or overexposed, by inheriting the outstanding properties
of event cameras; (ii) they can be used for fine-tuning on real data to
improve over state-of-the-art for both classification and semantic
segmentation.
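The core conversion idea can be sketched with the ideal per-pixel event
model: emit an event each time the log intensity changes by a contrast
threshold. This naive version (fixed threshold, no frame interpolation or
noise model) only illustrates the principle, not the paper's pipeline.

```python
import numpy as np

def video_to_events(frames, timestamps, C=0.2):
    """Naive event generation from a (temporally upsampled) video.

    frames     : list of grayscale images (H, W)
    timestamps : per-frame times in seconds
    C          : contrast threshold of the ideal event-camera model
    """
    log_ref = np.log(frames[0].astype(np.float64) + 1e-3)
    events = []  # (x, y, t, polarity)
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_now = np.log(frame.astype(np.float64) + 1e-3)
        while True:
            diff = log_now - log_ref
            ys, xs = np.where(np.abs(diff) >= C)
            if len(xs) == 0:
                break
            pol = np.sign(diff[ys, xs])
            events.extend(zip(xs, ys, [t] * len(xs), pol))
            # Advance the reference by one threshold crossing at a time,
            # so large changes emit multiple events.
            log_ref[ys, xs] += C * pol
    return events
```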
2018
Adaptive localization and mapping with application to planetary
rovers
Hidalgo-Carrió, Javier,
Poulakis, Pantelis,
and Kirchner, Frank
Future exploration rovers will be equipped with substantial onboard
autonomy. SLAM is a fundamental part of this and has a close connection with
robot perception, planning, and control. The community has made great
progress in the past decade by enabling real-world solutions and is
addressing important challenges in high-level scalability, resource
awareness, and domain adaptation. A novel adaptive SLAM system is proposed
to meet rover navigation and computational demands. It starts from a
three-dimensional odometry dead reckoning solution and builds up to a full
graph optimization that takes into account rover traction performance. A
complete kinematic model of the rover locomotion system improves the wheel
odometry solution. In addition, an odometry error model is inferred using
Gaussian processes (GPs) to predict nonsystematic errors induced by poor
traction of the rover with the terrain. The nonparametric GP regression
serves to adapt the localization and mapping to the current navigation
demands (domain adaptation). The method brings scalability and adaptiveness
to modern SLAM. To this end, an adaptive strategy is developed to adjust the
image frame rate (active perception) and to influence the optimization
backend by including highly informative keyframes in the graph (adaptive
information gain). The work is experimentally verified on a representative
planetary rover under a realistic field test scenario. The results show a
modern SLAM system that adapts to the predicted error. The system maintains
accuracy with a smaller number of nodes, taking the most benefit of both
wheel and visual methods in a consistent graph-based smoothing approach.
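As a hedged illustration of the active-perception idea, one could map the
GP's predicted odometry uncertainty to a camera frame rate: more images when
traction is poor, fewer on benign terrain. The bounds and mapping below are
made-up parameters, not the thesis's actual policy.

```python
def adaptive_frame_rate(gp_sigma, rate_min=0.5, rate_max=5.0, sigma_max=0.05):
    """Scale the camera rate (Hz) with the GP-predicted odometry-residual
    standard deviation gp_sigma; all thresholds are illustrative."""
    alpha = min(1.0, gp_sigma / sigma_max)
    return rate_min + alpha * (rate_max - rate_min)
```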
2017
Gaussian Process Estimation of Odometry Errors for Localization and
Mapping
Hidalgo-Carrió, Javier,
Hennes, Daniel,
Schwendner, Jakob,
and Kirchner, Frank
In IEEE International Conference on Robotics and Automation (ICRA)
2017
Since early in robotics, the performance of odometry techniques has been a
constant subject of research for mobile robots, owing to its direct
influence on localization. The pose error grows unbounded in dead-reckoning
systems, and its uncertainty has negative impacts on localization and
mapping (i.e. SLAM). The dead-reckoning performance in terms of residuals,
i.e. the difference between the expected and the real pose state, is related
to the statistical error or uncertainty in probabilistic motion models. A
novel approach to model odometry errors using Gaussian processes (GPs) is
presented. The methodology trains a GP on the residual between the
non-linear parametric motion model and the ground truth training data. The
result is a GP over odometry residuals which provides an expected value and
its uncertainty in order to enhance the belief with respect to the
parametric model. Localization and mapping benefit from a comprehensive
GP-odometry residuals model. The approach is applied to a planetary rover in
an unstructured environment. We show that our approach enhances visual SLAM
by efficiently computing image frames and effectively distributing keyframes.
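The underlying regression can be sketched with a plain squared-exponential
GP over motion descriptors; the inputs, kernel, and hyperparameters below
are assumptions for illustration, not the paper's trained model.

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=1.0, sigma_f=1.0, sigma_n=0.1):
    """GP regression with a squared-exponential kernel, sketching the idea
    of learning odometry residuals: X are motion descriptors (e.g. commanded
    velocity, IMU statistics), y are observed pose residuals."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sigma_f**2 * np.exp(-0.5 * d2 / length**2)

    K = k(X_train, X_train) + sigma_n**2 * np.eye(len(X_train))
    K_s = k(X_test, X_train)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s @ alpha                                 # expected residual
    cov = k(X_test, X_test) - K_s @ np.linalg.solve(K, K_s.T)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))    # 1-sigma uncertainty
    return mean, std
```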
2016
On the Design of Attitude-Heading Reference Systems Using the Allan
Variance.
Hidalgo-Carrió, Javier,
Arnold, Sascha,
and Poulakis, Pantelis
IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency
Control
2016
The Allan variance is a method to characterize stochastic random
processes. The technique was originally developed to characterize the
stability of atomic clocks and has also been successfully applied to the
characterization of inertial sensors. Inertial navigation systems (INS) can
provide accurate results over short time spans, but these tend to degrade
rapidly over longer time intervals. During the last decade, the performance of inertial
sensors has significantly improved, particularly in terms of signal stability,
mechanical robustness, and power consumption. The mass and volume of inertial
sensors have also been significantly reduced, offering system-level design
and accommodation advantages. This paper presents a complete methodology for
the characterization and modeling of inertial sensors using the Allan variance,
with direct application to navigation systems. Although the concept of sensor
fusion is relatively straightforward, accurate characterization and
filtering of sensor information are not trivial tasks, yet they are
essential for good performance. A complete and reproducible methodology utilizing the
Allan variance, including all the intermediate steps, is described. An
end-to-end (E2E) process for sensor-error characterization and modeling up
to the final integration in the sensor-fusion scheme is explained in detail.
The strength of this approach is demonstrated with representative tests on
novel, high-grade inertial sensors. Experimental navigation results are
presented from two distinct robotic applications: a planetary exploration rover
prototype and an autonomous underwater vehicle (AUV).
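The core computation is standard; a minimal overlapping Allan variance over
a rate signal might look as follows (a generic sketch, not the paper's
exact tooling).

```python
import numpy as np

def allan_variance(rate, fs, taus):
    """Overlapping Allan variance of a rate signal (e.g. gyro output in
    rad/s) sampled at fs Hz, evaluated at the averaging times in `taus`."""
    theta = np.cumsum(rate) / fs          # integrated signal (angle)
    avar = []
    for tau in taus:
        m = int(tau * fs)                 # samples per averaging cluster
        if m < 1 or 2 * m >= len(theta):
            avar.append(np.nan)           # tau not computable for this record
            continue
        # sigma^2(tau) = <(theta_{k+2m} - 2 theta_{k+m} + theta_k)^2> / (2 tau^2)
        d = theta[2 * m:] - 2 * theta[m:-m] + theta[:-2 * m]
        avar.append(np.mean(d ** 2) / (2 * tau ** 2))
    return np.asarray(avar)
```

Plotting sqrt(avar) against tau on log-log axes yields the familiar Allan
deviation curve from which noise terms (e.g. angle random walk, bias
instability) are read off.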
EnviRe - Environment Representation for Long-term Autonomy
Data representation is a key element for robots to navigate and perform
autonomous tasks in unstructured environments: such tasks call for an
environment model suitable for systems that may operate over long periods
of time. This work presents EnviRe, an environment representation model
that facilitates long-term tasks for autonomous systems in real-world
scenarios. The EnviRe model is a strongly connected directed graph which
supports sensor data acquisition, data processing, reasoning, and
operations among different data formats in order to meet navigation and
planning demands.
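A minimal sketch of such a representation, assuming frames connected by
invertible rigid-body transforms and heterogeneous map items attached to
frames; the real EnviRe library is C++, so this toy Python class only
illustrates the structure.

```python
import numpy as np

class EnvironmentGraph:
    """Toy directed graph of frames and transforms in the spirit of an
    EnviRe-like representation."""
    def __init__(self):
        self.items = {}    # frame name -> list of map items (grids, clouds, ...)
        self.edges = {}    # (parent, child) -> 4x4 homogeneous transform

    def add_frame(self, name):
        self.items.setdefault(name, [])

    def add_transform(self, parent, child, T):
        self.add_frame(parent); self.add_frame(child)
        self.edges[(parent, child)] = T
        # Store the inverse edge too, keeping the graph strongly connected.
        self.edges[(child, parent)] = np.linalg.inv(T)

    def attach(self, frame, item):
        self.items[frame].append(item)
```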
2015
First Experimental Investigations on Wheel-Walking for Improving
Triple-Bogie Rover Locomotion Performances
Deployment actuators of a triple-bogie rover locomotion platform can
be used to perform Wheel-Walking (WW) manoeuvres. How WW could affect the
traversing capabilities of rovers is a recurrent debate in the planetary
robotics community. The Automation and Robotics Section of ESTEC has
initiated a long-term project to evaluate the performance of WW manoeuvres
in different scenarios. This paper presents the first experimental results
of this project, obtained during the test campaign run in November 2014 at
the Planetary Robotics Lab (PRL) of ESTEC, and shows the performance
analysis made when comparing WW with standard rolling. The PRL rover
prototype ExoTeR was used to test three different scenarios: entrapment in
loose soil, up-slope traverse and lander egressing. WW locomotion showed
increased capabilities in all scenarios and proved its relevance and
advantages for planetary exploration missions.
Soil Contact Model for Environment Representation and Simulation of
Legged Robot on Planetary Surface
Yoo, Yong-Ho,
Schwendner, Jakob,
Hidalgo-Carrió, Javier,
and Kirchner, Frank
This paper introduces a new soil contact model called plastic
terramechanics particle (PTP). The purpose of the model is the simulation of
legged robots on planetary surfaces. An analysis of walking and contact
behaviors of a legged robot on the soil surface has been performed. From the
analysis, fundamental contact behaviors have been selected and realized in the
PTP soil contact model where complex foot-soil contact dynamics is modeled by
using volume-cells and particles. This approach considerably reduces the
modeling complexity and provides real-time processing capabilities. The work
also shows an integration of the soil contact model into the environment
representation used for navigation, operation and simulation tasks of legged
robots. Typical environment representations for robotic applications use 3D
maps based on multi-level laser scan data acquired by exteroceptive sensors
mounted on the robot. The manuscript introduces the environment representation
as a bridge between real and simulated world shared among robotic subsystems.
Entern - Environment Modelling and Navigation for Robotic
Space-Exploration
Schwendner, Jakob,
Hidalgo-Carrió, Javier,
Dominguez, Raul,
Planthaber, Steffen,
Yoo, Yong-Ho,
Asadi, Behnam,
Machowinski, Janosch,
Rauch, Christian,
and Kirchner, Frank
In Advanced Space Technologies for Robotics and Automation
2015
Lunar and planetary craters and caves are of special scientific
interest and have the potential to provide shelter for human habitats.
Robots could provide the means to explore these difficult environments.
A number of challenges are involved with the exploration: The robots have to be
highly mobile to negotiate the difficult terrain, and need to perform most of
their task autonomously, especially in caves lacking radio communication. This
paper gives an overview of the Entern project and the associated goals and
challenges. This includes the research of technologies for operations,
environment representation and navigation. Special emphasis is put on the
development of on-board simulation, to improve the reliability and the
operational envelope of the robots. Further, a description of evaluation
scenarios in relevant earth analogue environments is provided.
2014
A Validation Process for Underwater Localization Algorithms
Hildebrandt, Marc,
Gaudig, Christopher,
Christensen, Leif,
Natarajan, Sankaranarayanan,
Hidalgo-Carrió, Javier,
Paranhos, Patrick Merz,
and Kirchner, Frank
International Journal of Advanced Robotic Systems
2014
This paper describes the validation process of a localization
algorithm for underwater vehicles. In order to develop new localization
algorithms, it is essential to characterize them with regard to their
accuracy, long-term stability and robustness to external sources of noise.
This is only possible if a gold-standard reference localization (GSRL) is
available against which any new localization algorithm (NLA) can be tested.
This process requires a vehicle which carries all the required sensor and
processing systems for both the GSRL and the NLA. This paper will show the
necessity of such a validation process, briefly sketch the test vehicle and
its capabilities, describe the challenges in computing the localizations of
both the GSRL and the NLA simultaneously for comparison, and conclude with
experimental data of real-world trials.
Static forces weighted Jacobian motion models for improved Odometry
Hidalgo-Carrió, Javier,
Babu, Ajish,
and Kirchner, Frank
In 2014 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS)
2014
The estimation of a robot’s motion at the prediction step of any
localization framework is commonly performed using a motion model in
conjunction with inertial measurements. In the context of field robotics,
articulated mobile robots have complex chassis. They might require a
complete model in comparison with the traditionally used planar assumption.
In this paper, we use a Jacobian motion model-based approach for real-time
inertial-aided odometry. The work makes use of the transformation approach
[1] to accurately model 6-DoF kinematics. The algorithm relates normal
forces with the probability of a contact point slipping. The result
increases the accuracy by weighting the least-squares solution using static
forces prediction. The method is applied to the Asguard v3 system, a simple
but highly capable leg-wheel hybrid robot. The performance of the approach
is demonstrated in extensive field testing within different unstructured
environments. An in-depth error analysis and comparison with planar
odometry is discussed, resulting in superior localization.
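The weighting idea can be sketched as a statics-weighted least-squares solve
of the Jacobian motion model, where each contact equation is scaled by its
predicted normal force so lightly loaded (slip-prone) contacts contribute
less. The shapes and names below are assumptions, not the paper's code.

```python
import numpy as np

def weighted_body_velocity(J, qdot, normal_forces, eps=1e-9):
    """Statics-weighted least squares for a Jacobian motion model.

    J             : (m, 6) stacked contact Jacobian rows
    qdot          : (m,) measured joint/wheel rate terms
    normal_forces : (m,) per-row normal-force prediction
    """
    w = normal_forces / (normal_forces.sum() + eps)
    sqrt_w = np.sqrt(w)[:, None]
    # Weighted least squares: x = argmin || sqrt(W) (J x - qdot) ||^2
    x, *_ = np.linalg.lstsq(sqrt_w * J, sqrt_w[:, 0] * qdot, rcond=None)
    return x  # 6-DoF body twist estimate
```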
2013
Navigation and Slip Kinematics for High Performance Motion Models
Hidalgo-Carrió, Javier
In Symposium on Advanced Space Technologies in Robotics and
Automation
2013
When absolute positioning systems are not available, kinematics
modeling and rover chassis analyses play a dominant role in rover
localization. It is the goal of this manuscript to describe and analyze a
complete kinematic model which captures the six DoF pose (position and
orientation) while traversing uneven terrains for hybrid systems and
independently actuated wheels. The model is analyzed in order to correctly
propagate rover pose as input for a pose estimator in localization towards
efficient dead reckoning processes. Testing results are discussed for
Asguard, a leg-wheel scout rover with a simple actuation system and
enhanced maneuver capabilities.
2012
Improving Planetary Rover Attitude Estimation via MEMS Sensor
Characterization
Hidalgo-Carrió, Javier,
Poulakis, Pantelis,
Köhler, Johan,
Del-Cerro, Jaime,
and Barrientos, Antonio
Micro Electro-Mechanical Systems (MEMS) are currently being
considered in the space sector due to their suitable level of performance
for spacecraft in terms of mechanical robustness, low power consumption,
small mass and size, and significant advantages in system design and accommodation.
However, there is still a lack of understanding regarding the performance and
testing of these new sensors, especially in planetary robotics. This paper
presents what is missing in the field: a complete methodology regarding the
characterization and modeling of MEMS sensors with direct application. A
reproducible and complete approach including all the intermediate steps, tools
and laboratory equipment is described. The process of sensor error
characterization and modeling through to the final integration in the
sensor fusion scheme is explained in detail. Although the concept of fusion
is relatively easy to comprehend, carefully characterizing and filtering
sensor information is not an easy task, yet it is essential for good performance. The
strength of the approach has been verified with representative tests of novel
high-grade MEMS inertial sensors and exemplary planetary rover platforms with
promising results.
Kinematics Modeling of a Hybrid Wheeled-Leg Planetary Rover
Hidalgo-Carrió, Javier,
and Cordes, Florian
In International Symposium on Artificial Intelligence, Robotics and
Automation in Space
2012
It is the goal of this manuscript to describe and analyze a complete
kinematic model for hybrid wheeled-leg rovers and its applicability to
Sherpa, a flexible rover with a complex actuation system. Differential
kinematic equations of the hybrid legs are combined to form the composite
equation for the rover motion. The model captures the 6-DoF pose (position
and orientation) while traversing uneven terrains for hybrid systems and
independently actuated joints. The kinematics model is analyzed in order to
correctly propagate rover pose as input for a pose estimator in
localization. Initial results from simulation are discussed for Sherpa
navigation kinematics towards efficient pose estimation and dead reckoning.
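In generic notation (illustrative, not necessarily the paper's exact
formulation), each leg i relates the body twist nu_b to its joint rates
q̇_i through Jacobians J_{b,i} and J_{q,i}; stacking all legs yields a
composite equation solved in the least-squares sense:

```latex
% Generic composite rover kinematics (illustrative notation).
\[
\underbrace{\begin{bmatrix} J_{b,1} \\ \vdots \\ J_{b,n} \end{bmatrix}}_{A}
\nu_b
=
\underbrace{\begin{bmatrix} J_{q,1}\,\dot{q}_1 \\ \vdots \\ J_{q,n}\,\dot{q}_n \end{bmatrix}}_{b}
\qquad\Rightarrow\qquad
\hat{\nu}_b = \left(A^{\top} A\right)^{-1} A^{\top} b
\]
```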
Planetary Rover Localization Design: Antecedents and Directions.
Hidalgo-Carrió, Javier,
Schwendner, Jakob,
and Kirchner, Frank
In IEEE Intelligent Vehicles Symposium Workshops
2012
This paper describes the localization problem in planetary rovers and its
influence on mission success. This manuscript gives an overview of the
localization subsystem from a system-level perspective and its impact on
mission operations. It discusses current problems in the design of a proper
sensor fusion scheme and addresses future challenges which need to be
pursued in order to evolve planetary rovers into more intelligent vehicles
for future planetary missions.
Terrain aided navigation for planetary exploration missions
Schwendner, Jakob,
and Hidalgo-Carrió, Javier
In International Symposium on Artificial Intelligence, Robotics and
Automation in Space
2012
A central part of current space activities is to learn more about our solar
system, its origins, its resources and its conditions for harbouring life.
Mobile robotic systems are a key technology for performing surface
exploration of celestial bodies, as they are able to withstand the harsh
conditions with reasonable effort. One important requirement for performing
navigation of a mobile robot is the ability to localise the system within a
known reference. Visual methods, which have proven useful in this context,
can add constraints on processing power and environmental conditions. In
this paper an alternative approach is presented which only uses inertial
sensors and encoders in order to localise within a known map. The method is
evaluated in a simulated lunar environment based on LRO digital elevation
maps. The results show an average localisation error of 11 m for a
travelled distance of 2.3 km, assuming low-precision MEMS gyros. The
proposed method can be applied to perform resource-efficient localisation
in situations where visual methods fail or are too costly to perform.
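The filtering idea can be sketched as a particle measurement update against
the elevation map; the Gaussian likelihood, interfaces, and parameter values
below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def measurement_update(particles, weights, dem, z_contact, sigma=0.3):
    """Terrain-aided particle weight update.

    particles : (N, 2) array of (x, y) map-position hypotheses
    weights   : (N,) current particle weights
    dem       : callable dem(x, y) interpolating the elevation map
    z_contact : elevation implied by attitude and contact kinematics
    """
    for i, (x, y) in enumerate(particles):
        innov = z_contact - dem(x, y)          # elevation residual
        weights[i] *= np.exp(-0.5 * (innov / sigma) ** 2)
    weights /= weights.sum()                    # renormalize
    return weights
```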
2011
A Miniaturised Space Qualified MEMS IMU for Rover Navigation
Requirements and Testing of a Proof of Concept Hardware Demonstrator
Rehrmann, Felix,
Schwendner, Jakob,
Cornforth, John,
Durrant, Dick,
Lindegren, Robert,
Selin, Per,
Hidalgo-Carrió, Javier,
Poulakis, Pantelis,
and Köhler, Johan
In Advanced Space Technologies for Robotics and Automation
2011
Mobile robotic systems will without a doubt become even more relevant for
space exploration missions than they currently are. The high cost of manned
programs and the recent success of robotic missions (e.g. MER) are likely
to drive a shift towards robotic missions. One of the key aspects of
exploration systems is mobility. Apart from the physical capabilities to
negotiate complex and difficult terrain, the aspect of navigation is also
of great importance. Global positioning systems like GPS are not available
outside of earthbound activities. The previous successful approaches for
mobile robot navigation in space are vision based, with benefits such as
environmental awareness and exploitation of the images for science. There
are, however, some design constraints to make their use feasible: image
processing requires a certain amount of processing resources and a
favourable positioning of the cameras. One possible alternative could be
the use of Terrain Aided Navigation methods for vision-free localization in
known environments. By using a-priori information about the environment
(e.g. from orbiter sensors or other sources), the information on the
orientation and the position of the environment contact points can be used
to estimate the position of the robot within the map [1]. The feasibility
of this approach is demonstrated using Digital Elevation Maps from the LRO
LOLA instrument and a simulated mobile robot based on the Asguard [2]
system. We employ a Bayesian filtering method to estimate the position of
the robot within the map, and compare it to the position from the odometry.
We also investigate the effect of the resolution of the a-priori map on the
final localization error. While the proposed method should not be seen as a
complete replacement of visual navigation, it can augment such systems. It
can also provide a localization solution with a bounded error for missions
where visual processing is not feasible due to resource or engineering
constraints.
ESTEC Testbed Capabilities for the Performance Characterization of
Planetary Rover Localization Sensors - First Results on IMU
Investigations
Hidalgo-Carrió, Javier,
Poulakis, Pantelis,
Barrientos, Antonio,
and Del-Cerro, Jaime
In ASTRA 2011 - 11th ESA Workshop on Advanced Space Technologies for
Robotics and Automation
2011
During the last year, internal research activities have been carried out at
ESTEC in the area of testing high-grade inertial sensors. This article
shows the performance characterization of prototype Inertial Measurement
Units (IMUs) in terms of facilities and methodologies within ESTEC, with
focus on MEMS inertial sensors for planetary rovers. Specifically, this
article shows the system level demonstration and capabilities of the test
facilities of the ESTEC Automation & Robotics section as well as first
performance results of MEMS inertial sensors for attitude estimation on
typical planetary rover manoeuvres.
Planetary Terrain Analysis for Robotic Missions
Masarotto, V,
Joudrier, L,
Hidalgo-Carrió, Javier,
and Lorenzoni, L
In Advanced Space Technologies for Robotics and Automation
2011
From the early studies to the actual design of a planetary rover mission,
the knowledge of the type of terrain that will be encountered is crucial.
Usually, a reference terrain is defined to help the design of the rover
subsystems, knowing that the terrain will be different during the actual
mission. Furthermore, once the landing site is selected, the evaluation of
the slopes is essential to measure the performance of an Entry, Descent and
Landing System. The main goal of this paper is to expose the method used to
measure the slope distribution of a terrain from its Digital Elevation Maps
and, through this, explain the requirements set for the reference terrain
for a Mars mission. Finally, methods are presented to generate sample
terrains to be used for rover design and navigation verification.
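A minimal sketch of measuring a slope distribution from a DEM, assuming a
regular grid: take the local gradient, convert it to slope angle, and
histogram the result (not necessarily the paper's exact procedure).

```python
import numpy as np

def slope_distribution(dem, cell_size, bins=np.arange(0.0, 35.0, 2.5)):
    """Slope statistics from a Digital Elevation Map.

    dem       : (H, W) array of elevations in meters
    cell_size : grid spacing in meters
    bins      : slope-angle bin edges in degrees (illustrative default)
    """
    dz_dy, dz_dx = np.gradient(dem, cell_size)         # local surface gradient
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    hist, edges = np.histogram(slope_deg, bins=bins, density=True)
    return slope_deg, hist, edges
```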