Visual Servoing for Mobile Ground Navigation
Abstract: This paper presents a vision-based control framework that attempts to mitigate several shortcomings of current approaches to mobile navigation, including the requirement for detailed 3D maps. The framework defines potential fields in image space and uses a subsumption process to combine hard, physical constraints with soft, guidance constraints while guaranteeing that hard constraint information is preserved. In addition, this representation can be defined with constant size, which can enable strong run-time guarantees to be made for visual servoing-based control. The framework is demonstrated with proof-of-concept examples in simulation and the real world, and is accompanied by data sets and an open-source implementation.
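The key property of the subsumption combination described above is that soft guidance can never mask a hard, physical constraint. A minimal sketch of one way such a per-pixel combination could work is shown below; the specific rule (hard values above a threshold subsume soft values) and all names here are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def combine_fields(hard, soft, hard_threshold=1.0):
    """Combine per-pixel potential fields so that hard-constraint
    information is preserved: wherever the hard field registers a
    constraint (>= threshold), its value subsumes the soft value;
    elsewhere the soft guidance potential passes through.

    This is a hypothetical combination rule for illustration only.
    """
    return np.where(hard >= hard_threshold, hard, soft)

# Toy example: a 4x4 "image" with one hard-obstacle pixel.
hard = np.zeros((4, 4))
hard[1, 2] = 1.0              # physical obstacle at this pixel
soft = np.full((4, 4), 0.3)   # uniform soft guidance potential
field = combine_fields(hard, soft)
# field[1, 2] keeps the hard value 1.0; all other pixels keep 0.3
```

Because the combination is applied pixel-wise over a fixed-size field, its cost is constant in the image dimensions, consistent with the run-time guarantees the abstract describes.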
Vision-based Navigation for Autonomous Vehicles
This guest talk was given Dec. 11, 2017 as part of the Intelligent & Interactive Systems Talk Series at Indiana University.
Abstract: In industries as varied as mining, agriculture, health care, and automated driving, many practical applications in robotics involve interacting with intelligent agents while navigating dynamic environments. While impressive results have been demonstrated in these domains, there are still basic classes of interactive navigation problems for which robust and general solutions have remained elusive. One such class is efficient navigation in the presence of non-cooperative and non-adversarial agents. This is the kind of problem pedestrians face when navigating crowded sidewalks, or drivers face when navigating crowded roadways. Two primary reasons this problem is difficult are that the problem models used tend to exhibit prohibitive computational complexity, and the problem formulations tend to have difficult-to-satisfy requirements for problem input and representations. This talk will present recent work that provides more efficient problem models, as well as new, vision-based problem formulations that seek to significantly simplify problem input and representation requirements.
Image Space Potential Fields: Constant Size Environment Representation for Vision-based Subsumption Control Architectures
This technical report presents an environment representation for use in vision-based navigation. The representation has two useful properties: 1) it has constant size, which can enable strong run-time guarantees to be made for control algorithms using it, and 2) it is structurally similar to a camera image space, which effectively allows control to operate in the sensor space rather than employing difficult, and often inaccurate, projections into a structurally different control space (e.g. Euclidean). The presented representation is intended to form the basis of a vision-based subsumption control architecture.
Encroachment Detection with Monocular Vision for Small, Low-cost, Compute-constrained Platforms
Constant Space Complexity Environment Representation for Vision-based Navigation
This paper presents a preliminary conceptual investigation into an environment representation that has constant space complexity with respect to the camera image space. This type of representation allows the planning algorithms of a mobile agent to bypass what are often complex and noisy transformations between camera image space and Euclidean space. The approach is to compute per-pixel potential values directly from processed camera data, which results in a discrete potential field that has constant space complexity with respect to the image plane. This can enable planning and control algorithms, whose complexity often depends on the size of the environment representation, to be defined with constant run-time. This type of approach can be particularly useful for platforms with strict resource constraints, such as embedded and real-time systems.
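The core idea above, computing per-pixel potential values directly from processed camera data so the representation's size is fixed by the image dimensions, can be sketched as follows. The use of a disparity map as the "processed camera data" and the specific potential function are assumptions made for illustration; the paper itself leaves the processing stage general.

```python
import numpy as np

def image_space_potential(disparity, k=1.0):
    """Compute a per-pixel repulsive potential directly from a
    disparity map (a stand-in for processed camera data).

    Higher disparity ~ nearer surface ~ higher potential. The
    output shape always equals the input image shape, so the
    representation has constant space complexity with respect to
    the image plane, regardless of scene complexity.
    """
    return k * disparity / (disparity.max() + 1e-9)

# Stand-in for a processed 48x64 camera frame.
disp = np.random.rand(48, 64)
field = image_space_potential(disp)
# field.shape == disp.shape for any scene content
```

Because the field's size never depends on scene content, a planner that iterates over it once per control cycle has a fixed per-cycle cost, which is the property the abstract highlights for embedded and real-time platforms.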