TY - JOUR
T1 - Parallel Computing as a Tool for Tuning the Gains of Automatic Control Laws
AU - Antonio Cruz, Mayra
AU - Silva Ortigoza, Ramon
AU - Marquez Sanchez, Celso
AU - Hernandez Guzman, Victor Manuel
AU - Sandoval Gutierrez, Jacobo
AU - Herrera Lozada, Juan Carlos
N1 - Publisher Copyright:
© 2003-2012 IEEE.
PY - 2017/6
Y1 - 2017/6
N2 - Selecting the gains of an automatic control law is not an easy task, since it requires expertise and specialized knowledge. It also demands considerable time because such a selection is usually carried out by trial and error, which implies re-running the control test every time the gains are modified until a 'good' gain selection is found. Thus, this paper presents a procedure based on parallel computing that facilitates the gain selection of an automatic control law and reduces the time spent on it. The procedure consists of the following four steps: i) Taking into account a tuning rule and the number of control gains, a finite set of numerical values is generated and grouped into arrays through a Matlab script; hence, a large number of gain combinations is obtained. ii) With these combinations, numerical simulations of the closed-loop system are performed simultaneously through the Matlab Parallel Computing Toolbox. iii) The resulting system responses are processed to determine which ones achieve the control objective. iv) Lastly, the gain combination that delivers the response with the smallest error is identified. The proposed procedure is applied to select the gains of a state feedback control that stabilizes the Furuta pendulum in the inverted upright position. The best gain selection is verified through an experimental test on a real Furuta pendulum. The main advantages of the proposed procedure are the large number of gain combinations that can be simulated in a short time, compared with the classical trial and error method, and its effectiveness in experimental application.
AB - Selecting the gains of an automatic control law is not an easy task, since it requires expertise and specialized knowledge. It also demands considerable time because such a selection is usually carried out by trial and error, which implies re-running the control test every time the gains are modified until a 'good' gain selection is found. Thus, this paper presents a procedure based on parallel computing that facilitates the gain selection of an automatic control law and reduces the time spent on it. The procedure consists of the following four steps: i) Taking into account a tuning rule and the number of control gains, a finite set of numerical values is generated and grouped into arrays through a Matlab script; hence, a large number of gain combinations is obtained. ii) With these combinations, numerical simulations of the closed-loop system are performed simultaneously through the Matlab Parallel Computing Toolbox. iii) The resulting system responses are processed to determine which ones achieve the control objective. iv) Lastly, the gain combination that delivers the response with the smallest error is identified. The proposed procedure is applied to select the gains of a state feedback control that stabilizes the Furuta pendulum in the inverted upright position. The best gain selection is verified through an experimental test on a real Furuta pendulum. The main advantages of the proposed procedure are the large number of gain combinations that can be simulated in a short time, compared with the classical trial and error method, and its effectiveness in experimental application.
KW - Furuta pendulum
KW - parallel computing
KW - automatic control
KW - gain selection
KW - stabilization
KW - state feedback
UR - http://www.scopus.com/inward/record.url?scp=85019713115&partnerID=8YFLogxK
U2 - 10.1109/TLA.2017.7932708
DO - 10.1109/TLA.2017.7932708
M3 - Article
SN - 1548-0992
VL - 15
SP - 1189
EP - 1196
JO - IEEE Latin America Transactions
JF - IEEE Latin America Transactions
IS - 6
M1 - 7932708
ER -