Keynote Speakers

More information will be added soon...

Bing Liu, IEEE Fellow, ACM Fellow, AAAI Fellow
Prof. Dr., University of Illinois at Chicago (UIC), USA

Bing Liu is a chair professor at Peking University (on leave from the University of Illinois at Chicago). He received his Ph.D. in Artificial Intelligence (AI) from the University of Edinburgh. Before joining UIC, he was a faculty member at the School of Computing, National University of Singapore (NUS). His research interests include lifelong and continual learning, sentiment analysis, chatbots, open-world AI/learning, natural language processing (NLP), data mining, and machine learning. He has published extensively in top conferences and journals and has authored four books: two on sentiment analysis, one on lifelong learning, and one on Web mining. Three of his papers received Test-of-Time Awards: two from SIGKDD (the ACM Special Interest Group on Knowledge Discovery and Data Mining) and one from WSDM (the ACM International Conference on Web Search and Data Mining); another received a Test-of-Time Award honorable mention, also from WSDM. Some of his work has been widely reported in the international press, including a front-page article in The New York Times. In professional service, he served as Chair of ACM SIGKDD from 2013 to 2017, as program chair of many leading data mining conferences, including KDD, ICDM, CIKM, WSDM, SDM, and PAKDD, as associate editor of leading journals such as TKDE, TWEB, DMKD, and TKDD, and as area chair or senior PC member of numerous NLP, AI, Web, and data mining conferences. He is a recipient of the ACM SIGKDD Innovation Award (the most prestigious technical award from SIGKDD) and a Fellow of the ACM, AAAI, and IEEE.

Speech Title: Learning on the Job in the Open World
Abstract:
In existing machine learning (ML) applications, once a model is built it is deployed to perform its intended task. During the application, the model stays fixed because of the closed-world assumption of the classic ML paradigm: anything seen in testing/application must have been seen in training. However, many real-life environments, such as those of chatbots and self-driving cars, are full of unknowns; these are called open environments or open worlds. We humans deal with such environments comfortably, detecting unknowns and learning about them continuously through interaction with other humans and the environment, adapting to the new environment and becoming more and more knowledgeable. In fact, we humans never stop learning: after formal education, we continue to learn on the job. AI systems should have the same on-the-job learning capability, for it is impossible for them to rely solely on manually labeled data and offline training to deal with the dynamic open world. This talk discusses this problem and presents some initial work in the context of natural language processing.
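The basic step any open-world learner needs, detecting that an input belongs to none of the known classes, can be illustrated with a minimal confidence-threshold rejector. This is only a generic sketch, not the speaker's method; the chatbot intent classes and the threshold value are invented for illustration:

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify_open_world(logits, classes, threshold=0.7):
    """Return a known class only when the model is confident;
    otherwise flag the input as 'unknown' so it can be queued
    for later (continual) learning."""
    probs = softmax(np.asarray(logits, dtype=float))
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return "unknown"
    return classes[best]

classes = ["greeting", "booking", "weather"]          # hypothetical chatbot intents
print(classify_open_world([4.0, 0.5, 0.2], classes))  # confident -> "greeting"
print(classify_open_world([1.1, 1.0, 0.9], classes))  # low confidence -> "unknown"
```

Inputs rejected as unknown are exactly the examples an on-the-job learner would collect, label through interaction, and fold back into the model.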



Hong Yan, IEEE Fellow
Prof. Dr., City University of Hong Kong

Hong Yan received his PhD degree from Yale University. He was Professor of Imaging Science at the University of Sydney and currently is Chair Professor of Computer Engineering and Wong Chung Hong Professor of Data Engineering at City University of Hong Kong. Professor Yan's research interests include image processing, pattern recognition, and bioinformatics. He has over 600 journal and conference publications in these areas. Professor Yan is an IEEE Fellow and IAPR Fellow. He received the 2016 Norbert Wiener Award from the IEEE SMC Society for contributions to image and biomolecular pattern recognition techniques. He is a member of the European Academy of Sciences and Arts.

Speech Title: Co-clustering for Detection and Analysis of Coherent Patterns in Multidimensional Big Data

Abstract: In real-world applications, multidimensional datasets can be very large, yet the meaningful patterns they contain may be much smaller. In a large matrix, traditional clustering algorithms classify the data along either the feature or the object direction. However, if a coherent pattern embedded in the data involves a subset of features and a subset of objects, then biclustering analysis is needed, which is often more complicated than clustering. The problem is even more challenging when the data dimensionality is higher. For example, in gene expression data, we may be interested in extracting a subset of genes that co-express under a subset of conditions at a subset of time points; in this case, we need to analyze three-dimensional data arrays, or perform triclustering. Recently, we have discovered that a class of coherent patterns in multidimensional data can be represented as hyperplanes in singular vector spaces. By decomposing a data array into singular vector matrices, we can then deal with pattern coherence in individual directions. We have applied our coherent pattern detection algorithms successfully to genomic data analysis, disease diagnosis, drug therapeutic effect assessment, and human facial expression analysis. Our method can also be useful for many other real-world data analysis applications.
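The key observation, that coherent patterns become low-rank structures (hyperplanes) in singular vector spaces, can be sketched on toy data. This is a minimal illustration of the principle, not the speaker's detection algorithm; an additive pattern (every entry is a row effect plus a column effect) is used because it is a standard example of a coherent bicluster:

```python
import numpy as np

rng = np.random.default_rng(0)

# An additive coherent pattern: every entry is row_effect + col_effect,
# e.g., genes that co-express with gene- and condition-specific offsets.
row_effect = rng.normal(size=8)
col_effect = rng.normal(size=6)
pattern = row_effect[:, None] + col_effect[None, :]

# Such a matrix has rank at most 2, so its rows lie on a plane
# (a hyperplane) spanned by the top singular vectors.
U, s, Vt = np.linalg.svd(pattern)
print("singular values:", np.round(s, 4))

# A rank-2 reconstruction recovers the pattern almost exactly.
approx = U[:, :2] * s[:2] @ Vt[:2, :]
print("rank-2 reconstruction error:", np.linalg.norm(pattern - approx))
```

Detecting such a bicluster inside a larger noisy matrix then amounts to finding the rows and columns whose projections fall on a common hyperplane in this singular vector space.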



Tianrui Li
Prof. Dr., Southwest Jiaotong University, China
Director of the Key Lab of Cloud Computing and Intelligent Technique of Sichuan Province

Dr. Tianrui Li is a Professor and the Director of the Key Lab of Cloud Computing and Intelligent Technique of Sichuan Province at Southwest Jiaotong University, China. Since 2000, he has co-edited 6 books and 11 special issues of international journals, received 16 Chinese invention patents, and published over 360 research papers in refereed journals (e.g., AI, IEEE TKDE, IEEE TEC, IEEE TFS, IEEE TIFS, IEEE ASLP, IEEE TIE, IEEE TC, IEEE TVT) and conferences (e.g., ACL, IJCAI, KDD, UbiComp, WWW, ICDM, CIKM, EMNLP). Five of these papers were ESI Hot Papers and 18 were ESI Highly Cited Papers. He serves as area editor of the International Journal of Computational Intelligence Systems (SCI), editor of Knowledge-Based Systems (SCI) and Information Fusion (SCI), and associate editor of ACM Transactions on Intelligent Systems and Technology, among others. He is an IRSS Fellow and Steering Committee Chair (2019-2020), an IEEE CIS Emergent Technologies Technical Committee (ETTC) member (2019-2020), an IEEE CIS Senior Members Committee member (2018-2020), a senior member of ACM and IEEE, an ACM SIGKDD member, Chair of the IEEE CIS Yibin Chapter (2013-2018), and Treasurer of the ACM SIGKDD China Chapter. He has trained over sixty graduate students (including 9 post-docs and 21 doctoral graduates), who now work at Microsoft Research Asia, Sichuan University, Huawei, JD, Baidu, Alibaba, and Tencent. His students have received Best Paper/Dissertation Awards 20 times, won the Sina Weibo Interaction-Prediction challenge of the Tianchi Big Data Competition (with a 200,000 RMB prize), and taken second place in both the Social Influence Analysis contest of the IJCAI-2016 competitions and the Weather Forecast contest of AI Challenger 2018.

Speech Title: Big Data Intelligence: Challenges and Our Solutions
Abstract: Data-driven intelligence has become a hot research topic in information science. This talk first outlines the challenges of data-driven intelligence and then presents our solutions, which cover the following aspects. 1) A hierarchical entropy-based approach is demonstrated for evaluating the effectiveness of data collection, the first step of data-driven intelligence. 2) A multi-view-based method is illustrated for filling in missing data, the preprocessing step. 3) A unified framework is outlined for parallel large-scale feature selection to manage high-dimensional big data. 4) A MapReduce-based parallel method, together with three parallel strategies, is presented for computing rough set approximations for classification, a step as fundamental to rough set-based data analysis as frequent pattern mining is to association rule mining. 5) Incremental learning-based approaches are shown for updating approximations and knowledge in dynamic data environments (e.g., under variation of objects, attributes, or attribute values); these improve computational efficiency by reusing previously acquired learning results to maintain knowledge without re-running the original data mining algorithm. 6) A deep learning-based model is developed to handle multiple heterogeneous data sources.
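The rough set approximations mentioned in point 4 have a simple textbook definition: objects with identical condition-attribute values form indiscernibility blocks, and a target set is bracketed by the blocks fully inside it (lower approximation) and the blocks touching it (upper approximation). The sketch below shows only this serial definition on an invented toy decision table; the talk's contribution is parallelizing the computation with MapReduce, which is not reproduced here:

```python
from collections import defaultdict

def approximations(objects, attrs, target):
    """Classic rough-set lower/upper approximations of `target`.
    `objects` maps object id -> attribute dict; objects with equal
    attribute values are indiscernible and form one equivalence block."""
    blocks = defaultdict(set)
    for obj, values in objects.items():
        blocks[tuple(values[a] for a in attrs)].add(obj)
    lower, upper = set(), set()
    for block in blocks.values():
        if block <= target:      # block lies entirely inside the target set
            lower |= block
        if block & target:       # block overlaps the target set
            upper |= block
    return lower, upper

# Toy decision table with two condition attributes (illustrative data).
objects = {
    1: {"color": "red",  "size": "big"},
    2: {"color": "red",  "size": "big"},
    3: {"color": "blue", "size": "big"},
    4: {"color": "blue", "size": "small"},
}
target = {1, 3}                  # objects of some decision class
low, up = approximations(objects, ["color", "size"], target)
print("lower:", sorted(low))     # -> [3]
print("upper:", sorted(up))      # -> [1, 2, 3]
```

The block-building loop is what a MapReduce version distributes: mapping each object to its attribute-value key and reducing each key's objects into one equivalence block.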

 

 



Nobuo Funabiki
Okayama University, Japan

Nobuo Funabiki received the B.S. and Ph.D. degrees in mathematical engineering and information physics from the University of Tokyo, Japan, in 1984 and 1993, respectively. He received the M.S. degree in electrical engineering from Case Western Reserve University, USA, in 1991. From 1984 to 1994, he was with the System Engineering Division, Sumitomo Metal Industries, Ltd., Japan. In 1994, he joined the Department of Information and Computer Sciences at Osaka University, Japan, as an assistant professor, and became an associate professor in 1995. He was a visiting researcher at the University of California, Santa Barbara, in 2000-2001. In 2001, he moved to the Department of Communication Network Engineering (currently the Department of Electrical and Communication Engineering) at Okayama University as a professor. His research interests include computer networks, optimization algorithms, educational technology, and Web technology. He is a member of IEEE, IEICE, and IPSJ. He has been an associate editor-in-chief of the Journal of Communications since 2016. He was the chairman of the IEEE Hiroshima Section in 2015 and 2016.

Speech Title: Throughput Estimation Model for the IEEE 802.11n Link in Wireless Local-Area Networks

Abstract: Nowadays, the IEEE 802.11 wireless local-area network (WLAN) is deployed everywhere in the world as an inexpensive and flexible access network to the Internet. A WLAN does not need a cable to connect a host or a mobile device to the access point (AP). Thus, it has several advantages over the wired LAN in mobility, flexibility, and low-cost deployment and management.

One disadvantage of WLAN is that throughput changes with the environment of the wireless communication link. Unlike in the wired LAN, the throughput drops as the link distance between the transmitter and the receiver increases, and it can drop further if obstacles such as walls stand between them. Another disadvantage is throughput degradation due to interfering signals from other devices. This can be especially serious in a dense WLAN, where a number of APs are allocated in the network field to cover a wide area and serve many hosts.

To overcome these problems, proper WLAN design is essential: the AP allocation in the network field, the channel assignment of the APs, and the host associations with them must all be optimized. An accurate throughput estimation model is then critical for optimizing these factors numerically, by repeatedly estimating the achievable throughput under the given conditions.

In this talk, I present our studies of the throughput estimation model for WLAN at Okayama University. Multinational students from Japan, Myanmar, Kenya, Indonesia, China, and Bangladesh have participated in the activities. I hope the audience will take an interest in this work and join our activities in the near future.

First, I briefly review IEEE 802.11 technologies for WLAN, such as the modulations and the CSMA/CA protocol. Then, I give an overview of the key technologies for high-speed wireless communications, including multiple-input multiple-output (MIMO), frame aggregation, and channel bonding (CB).

Second, I introduce the throughput estimation model for a single link when no interference is observed. This model combines the log-distance path loss model, which estimates the received signal strength (RSS), with a sigmoid function that estimates the throughput from the RSS. The accuracy of the model has been confirmed through extensive experiments.
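The two components of this single-link model can be sketched as follows. The functional forms (log-distance path loss, then a sigmoid from RSS to throughput) follow the description above, but every coefficient below (reference RSS, path-loss exponent, sigmoid parameters) is an illustrative placeholder, not a calibrated value from the experiments:

```python
import math

def rss_dbm(distance_m, p0=-30.0, alpha=2.5, walls_db=0.0):
    """Log-distance path loss: received signal strength (dBm) at
    `distance_m` metres, with p0 the RSS at 1 m, alpha the path-loss
    exponent, and optional extra attenuation for walls on the path.
    All parameter values here are illustrative placeholders."""
    return p0 - 10.0 * alpha * math.log10(max(distance_m, 1.0)) - walls_db

def throughput_mbps(rss, a=150.0, b=-65.0, c=5.0):
    """Sigmoid mapping from RSS (dBm) to throughput (Mbps):
    a is the saturated link speed, b the midpoint RSS, c the slope."""
    return a / (1.0 + math.exp(-(rss - b) / c))

for d in (1, 10, 30):
    r = rss_dbm(d)
    print(f"{d:3d} m: RSS {r:7.1f} dBm, estimated {throughput_mbps(r):6.1f} Mbps")
```

In a WLAN design loop, these two functions would be evaluated for every candidate AP placement and host association to score a configuration by its estimated total throughput.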

Third, I introduce the throughput drop estimation model for two interfering links. This model estimates how much the throughput drops due to the signal from one interfering link. I then extend the model to three or more interfering links. The accuracy of these models has also been confirmed through extensive experiments.

Finally, I discuss the application of the throughput estimation model to the channel assignment of the APs in WLAN. Both simulation and experimental results show that channel assignment using the proposed model can increase the overall throughput when additional, partially interfering channels are adopted.

 



Yunbo Rao
University of Electronic Science and Technology of China, China

Yunbo Rao received his B.S. degree from Sichuan Normal University in 2003 and his M.E. degree from the University of Electronic Science and Technology of China (UESTC) in 2006, and received his Ph.D. degree from UESTC, Yibin, in 2012, both graduate degrees in the School of Computer Science and Engineering (SCSE), under the supervision of Prof. LeiTing Chen. He was a visiting scholar in Electrical Engineering at the University of Washington, Seattle, USA, from October 2009 to October 2011, supervised by Prof. Mingting Sun. His research interests include video enhancement, computer vision, three-dimensional reconstruction, virtual reality, augmented reality, and crowd animation. He also worked as a research intern at Neusoft Inc. during 2004-2008.

In May 2012, he joined the School of Information and Software Engineering, University of Electronic Science and Technology of China (UESTC), where he is currently an associate professor. He has been a Ph.D. supervisor since December 2017.