September 1: Online Lecture by Professor Weidong Liu of Shanghai Jiao Tong University

(Published: 2020-09-14)

Lecture Title: Variance Reduced Median-of-Means Estimator for Byzantine-Robust Distributed Inference

Speaker: Professor Weidong Liu, Shanghai Jiao Tong University

Time: September 1, 2020 (Tuesday), 2:00-4:00 PM

How to Join: The talk will be held via Tencent Meeting.

Meeting ID: 508 382 648

Live Stream: https://meeting.tencent.com/l/C7OlU75b7abf

About the Speaker:

Weidong Liu is Vice Dean of the School of Mathematical Sciences at Shanghai Jiao Tong University, a Distinguished Professor, and a recipient of the National Science Fund for Distinguished Young Scholars. He received his Ph.D. from Zhejiang University in 2008 and held postdoctoral positions at the Hong Kong University of Science and Technology and the Wharton School of the University of Pennsylvania from 2008 to 2011. In 2010 he received the National Excellent Doctoral Dissertation Award and the New World Mathematics Award conferred by the International Congress of Chinese Mathematicians; in 2013 he received the National Science Fund for Excellent Young Scholars; in 2016 he was selected as a Young Top-Notch Talent of the National Ten Thousand Talent Program; and in 2018 he received the National Science Fund for Distinguished Young Scholars. His research interests include modern statistics and machine learning, and he has published more than 40 papers in the four leading statistics journals (AoS, JASA, JRSSB, Biometrika) and the top machine-learning journal JMLR.


Abstract:

This paper develops an efficient distributed inference algorithm that is robust against a moderate fraction of Byzantine nodes, namely arbitrary and possibly adversarial machines in a distributed learning system. In robust statistics, the median-of-means (MOM) has been a popular approach to hedge against Byzantine failures due to its ease of implementation and computational efficiency. However, the MOM estimator suffers from a loss of statistical efficiency. The first main contribution of the paper is to propose a variance reduced median-of-means (VRMOM) estimator, which improves the statistical efficiency over the vanilla MOM estimator and is computationally as efficient as the MOM. Based on the proposed VRMOM estimator, we develop a general distributed inference algorithm that is robust against Byzantine failures. Theoretically, our distributed algorithm achieves a fast convergence rate with only a constant number of rounds of communication. We also provide an asymptotic normality result for the purpose of statistical inference. To the best of our knowledge, this is the first normality result in the setting of Byzantine-robust distributed learning. Simulation results are also presented to illustrate the effectiveness of our method.
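The talk's VRMOM construction is not detailed in this abstract, but the vanilla median-of-means idea it builds on is standard and can be sketched briefly: partition the samples into blocks (in a distributed setting, one block per machine), average each block, and take the median of the block means, so that a minority of corrupted blocks cannot move the estimate far. The following is a minimal illustrative sketch, not the speaker's algorithm; the function name and data are hypothetical.

```python
import numpy as np

def median_of_means(samples, num_blocks):
    """Vanilla MOM: split samples into blocks, average each block,
    then take the median of the block means. If fewer than half the
    blocks are corrupted, the median stays near the true mean."""
    blocks = np.array_split(np.asarray(samples, dtype=float), num_blocks)
    block_means = np.array([b.mean() for b in blocks])
    return np.median(block_means)

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=1.0, size=300)
data[:10] = 1e6  # one "Byzantine" block of adversarial values

print(median_of_means(data, num_blocks=30))  # robust, stays near 1.0
print(data.mean())                           # naive mean is destroyed
```

Here the 10 corrupted samples land in a single block out of 30, so their block mean is an outlier among the block means and the median ignores it, while the plain sample mean is pulled arbitrarily far off.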


