TY - GEN
T1 - Video processing in real-time in FPGA
AU - Morales, Erick
AU - Herrera, Roberto
N1 - Publisher Copyright:
© 2018 SPIE.
PY - 2018
Y1 - 2018
N2 - Computer vision has a high computational cost; nevertheless, several algorithms have been implemented to extract image features that enable classification, object and face recognition, and similar tasks. Some solutions have been developed on computers, DSPs, and GPUs, but these are not optimal in terms of processing time. To improve the performance of these algorithms, we implement the SURF algorithm on an embedded system (FPGA) and apply it to uncontrolled environments that require real-time response. In this work we develop a SURF implementation to improve processing time in video and image processing. We use an FPGA to run the algorithm, and we compare the processing time on different devices as well as the features found in the images; these features are invariant to scale, rotation, and lighting. The SURF algorithm localizes interest points (features) and is used in facial recognition, object detection, stereo vision, and so on. The algorithm has a high computational cost because it processes large amounts of data; to reduce this cost, we implemented LUTs and optimized the code to reduce execution time. With this work we aim to find the best way to implement the algorithm on embedded systems, for use in uncontrolled environments and autonomous robots.
AB - Computer vision has a high computational cost; nevertheless, several algorithms have been implemented to extract image features that enable classification, object and face recognition, and similar tasks. Some solutions have been developed on computers, DSPs, and GPUs, but these are not optimal in terms of processing time. To improve the performance of these algorithms, we implement the SURF algorithm on an embedded system (FPGA) and apply it to uncontrolled environments that require real-time response. In this work we develop a SURF implementation to improve processing time in video and image processing. We use an FPGA to run the algorithm, and we compare the processing time on different devices as well as the features found in the images; these features are invariant to scale, rotation, and lighting. The SURF algorithm localizes interest points (features) and is used in facial recognition, object detection, stereo vision, and so on. The algorithm has a high computational cost because it processes large amounts of data; to reduce this cost, we implemented LUTs and optimized the code to reduce execution time. With this work we aim to find the best way to implement the algorithm on embedded systems, for use in uncontrolled environments and autonomous robots.
UR - http://www.scopus.com/inward/record.url?scp=85054609451&partnerID=8YFLogxK
U2 - 10.1117/12.2322021
DO - 10.1117/12.2322021
M3 - Conference contribution
AN - SCOPUS:85054609451
SN - 9781510620735
T3 - Proceedings of SPIE - The International Society for Optical Engineering
BT - Optics and Photonics for Information Processing XII
A2 - Iftekharuddin, Khan M.
A2 - Diaz-Ramirez, Victor H.
A2 - Vazquez, Mireya Garcia
A2 - Awwal, Abdul A. S.
A2 - Marquez, Andres
PB - SPIE
T2 - Optics and Photonics for Information Processing XII 2018
Y2 - 19 August 2018 through 20 August 2018
ER -