Cite this article: GU Bin-jie, PAN Feng. Accurate incremental online v-support vector regression learning algorithm [J]. Control Theory & Applications, 2016, 33(4): 466–478.
Accurate incremental online v-support vector regression learning algorithm
Received: 2015-04-15    Revised: 2015-11-26
DOI: 10.7641/CTA.2016.50303
Keywords: online learning; v-support vector regression; machine learning; learning algorithms; feasibility analysis; finite convergence analysis
Funding: Supported by the National Natural Science Foundation of China (61273131, 61403168).
Authors:
GU Bin-jie, Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University. E-mail: gubinjie1980@126.com
PAN Feng, Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University.
Abstract
      The dual problem of the v-support vector regression machine (v-SVR) suffers from two difficulties: the extra linear term added to the dual objective function makes it impossible to generate a valid initial solution, and the solution path may become infeasible during the adiabatic incremental adjustment process. To resolve both, we propose an accurate incremental online v-SVR learning algorithm. First, starting from an equivalent formulation of v-SVR, the algorithm removes these difficulties through prior adjustments, relaxed adiabatic incremental adjustments, and accurate restoration adjustments. Then, the feasibility and the finite convergence of the algorithm are analyzed theoretically. Finally, simulation results on four benchmark datasets confirm that every adjustment step of the algorithm is reliable and that it converges to the optimal solution of the minimization problem within a finite number of adjustments; moreover, compared with batch learning, its advantage in training time becomes increasingly pronounced as the number of training samples grows.
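For orientation, the sketch below records the standard v-SVR dual in the form given by Schölkopf et al. (2000); the paper works from an equivalent reformulation, so its exact notation may differ. The size constraint on the sum of the multipliers (carried into the objective as a linear term in some equivalent forms) is what rules out the all-zero starting point that incremental ε-SVR algorithms rely on.

```latex
% Standard nu-SVR dual (Schoelkopf et al., 2000), shown for context only;
% the paper uses an equivalent reformulation whose details may differ.
\[
\begin{aligned}
\max_{\alpha,\,\alpha^{*}}\quad
  & -\tfrac{1}{2}\sum_{i,j=1}^{\ell}
      (\alpha_i-\alpha_i^{*})(\alpha_j-\alpha_j^{*})\,K(x_i,x_j)
    +\sum_{i=1}^{\ell} y_i\,(\alpha_i-\alpha_i^{*}) \\
\text{s.t.}\quad
  & \sum_{i=1}^{\ell}(\alpha_i-\alpha_i^{*}) = 0, \qquad
    \sum_{i=1}^{\ell}(\alpha_i+\alpha_i^{*}) \le C\nu, \\
  & 0 \le \alpha_i,\ \alpha_i^{*} \le \tfrac{C}{\ell},
    \qquad i = 1,\dots,\ell .
\end{aligned}
\]
```

When the size constraint is tight, as in many implementations, setting all multipliers to zero is infeasible, so the usual empty-machine initialization of incremental ε-SVR does not carry over; this is presumably the initialization difficulty that the prior and restoration adjustments work around.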
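The paper's accurate incremental updater is not part of standard libraries, so only the batch baseline it is compared against can be reproduced directly. Below is a minimal sketch of that baseline protocol using scikit-learn's NuSVR; the toy data and the nu, C and gamma values are illustrative assumptions, not the paper's benchmark setup.

```python
# Sketch of the batch-retraining baseline that an incremental v-SVR
# algorithm is compared against. Only the baseline is shown; the paper's
# incremental updater is not available in scikit-learn.
import time
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(300, 1))             # toy 1-D inputs
y = np.sinc(X).ravel() + 0.1 * rng.normal(size=300)   # noisy sinc targets

fit_times = []
for n in range(50, len(X) + 1, 50):
    # Batch learning must refit from scratch on all n samples seen so far.
    model = NuSVR(nu=0.5, C=10.0, kernel="rbf", gamma=1.0)
    t0 = time.perf_counter()
    model.fit(X[:n], y[:n])
    fit_times.append((n, time.perf_counter() - t0))

for n, t in fit_times:
    print(f"n = {n:3d}  batch refit: {t * 1e3:7.2f} ms")
```

Every new arrival forces a full refit whose cost grows quickly with n, whereas an exact incremental update adjusts the existing multipliers to absorb one sample while keeping the optimality conditions satisfied; this is the source of the training-time advantage reported in the abstract.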