Lecture Title: Variance-Reduced Median-of-Means Estimator for Byzantine-Robust Distributed Inference
Speaker: Weidong Liu, Shanghai Jiao Tong University
Time: Tuesday, September 1, 2020, 2:00–4:00 PM
How to Join: Tencent Meeting, Meeting ID: 508 382 648
Live Stream URL: https://meeting.tencent.com/l/C7OlU75b7abf
Speaker Bio:
Weidong Liu is a Distinguished Professor and Associate Dean of the School of Mathematical Sciences at Shanghai Jiao Tong University, and a recipient of a national-level high-level young talent program. He received his Ph.D. from Zhejiang University in 2008 and carried out postdoctoral research at the Hong Kong University of Science and Technology and the Wharton School of the University of Pennsylvania from 2008 to 2011. In 2010 he received the National Top 100 Excellent Doctoral Dissertation Award and the New World Mathematics Award conferred by the International Congress of Chinese Mathematicians; in 2013 he received funding from a national-level young talent program; in 2016 he was selected as a national-level high-level leading talent; and in 2018 he received funding from a national-level high-level young talent program. His research interests include modern statistics and machine learning, and he has published more than 40 papers in the four top statistics journals (AoS, JASA, JRSSB, Biometrika) and the top machine learning journal JMLR.
Abstract:
This paper develops an efficient distributed inference algorithm that is robust against a moderate fraction of Byzantine nodes, namely arbitrary and possibly adversarial machines in a distributed learning system. In robust statistics, the median-of-means (MOM) estimator has been a popular approach to hedge against Byzantine failures due to its ease of implementation and computational efficiency. However, the MOM estimator has a shortcoming in terms of statistical efficiency. The first main contribution of the paper is to propose a variance-reduced median-of-means (VRMOM) estimator, which improves the statistical efficiency over the vanilla MOM estimator and is computationally as efficient as the MOM. Based on the proposed VRMOM estimator, we develop a general distributed inference algorithm that is robust against Byzantine failures. Theoretically, our distributed algorithm achieves a fast convergence rate with only a constant number of rounds of communication. We also provide an asymptotic normality result for the purpose of statistical inference. To the best of our knowledge, this is the first normality result in the setting of Byzantine-robust distributed learning. Simulation results are also presented to illustrate the effectiveness of our method.
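As background for the talk, the vanilla median-of-means idea referenced in the abstract can be sketched in a few lines. The code below is a minimal, hypothetical illustration for estimating a scalar mean (block count, contamination level, and function names are chosen for illustration only); it is not the paper's VRMOM estimator or its distributed algorithm.

```python
# Illustrative sketch of the vanilla median-of-means (MOM) estimator for a
# scalar mean. Not the paper's VRMOM estimator; parameters are illustrative.
import numpy as np

def median_of_means(x, num_blocks):
    """Split the sample into disjoint blocks, average each block, and return
    the median of the block means. Taking the median is what hedges against
    a moderate fraction of corrupted blocks (e.g., Byzantine machines, each
    contributing one block)."""
    x = np.asarray(x, dtype=float)
    blocks = np.array_split(x, num_blocks)
    block_means = np.array([b.mean() for b in blocks])
    return np.median(block_means)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = rng.normal(loc=1.0, scale=1.0, size=10_000)
    # Corrupt a small fraction of observations to mimic Byzantine contamination.
    sample[:300] = 1e6
    print("sample mean :", sample.mean())                            # ruined by outliers
    print("MOM estimate:", median_of_means(sample, num_blocks=50))   # close to 1.0
```

In this toy setting the ordinary sample mean is destroyed by the contaminated observations, while the MOM estimate remains close to the true mean as long as fewer than half of the blocks are affected, which is the robustness property the talk builds on before addressing MOM's loss of statistical efficiency.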