Autonomous Vehicle and Augmented Reality Usage
With the development of autonomous driving technology, the need for additional applications used both inside and outside the vehicle is increasing. The literature review shows that many applications have been developed to display vehicle data directly on monitors, as reflections on glass, and on dedicated hardware devices. These applications, however, each address a single defined problem and target one particular autonomous system. In this study, a basic autonomous vehicle software infrastructure and a mobile Augmented Reality (AR) application that runs on Android devices were developed. The mobile AR application serves users both inside and outside the vehicle. In addition, this application is shown to support multiple autonomous system infrastructures.
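To illustrate how a single AR application might support multiple autonomous system infrastructures, the following sketch abstracts each vehicle stack behind a common interface. This is not code from the study; the class and method names are hypothetical, and a real implementation would bind such a backend to an actual data source (e.g. a ROS-based stack):

```python
from abc import ABC, abstractmethod


class VehicleBackend(ABC):
    """Hypothetical adapter for one autonomous-system infrastructure.

    Each supported stack (ROS-based, vendor-specific, etc.) would provide
    its own implementation; the AR overlay only sees this interface.
    """

    @abstractmethod
    def read_speed(self) -> float:
        """Return the current vehicle speed in m/s."""


class SimulatedBackend(VehicleBackend):
    """Stand-in backend returning a fixed speed, for illustration only."""

    def __init__(self, speed: float) -> None:
        self._speed = speed

    def read_speed(self) -> float:
        return self._speed


def overlay_text(backend: VehicleBackend) -> str:
    """Build the text an AR overlay would render, independent of the stack."""
    return f"{backend.read_speed():.1f} m/s"
```

Because the overlay depends only on `VehicleBackend`, adding support for another autonomous infrastructure means writing one new adapter class rather than changing the AR application itself.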
Copyright (c) 2019 International Journal of Engineering and Management Research
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.