Yixin Chen
Birth Date: 1979
Occupation: Computer scientist, academic, and author
Awards: Fellow, Institute of Electrical and Electronics Engineers (IEEE); Fellow, Asia-Pacific Artificial Intelligence Association (AAIA)
Education: B.Sc., M.Sc., and Ph.D. in computer science
Workplaces: Washington University in St. Louis
Yixin Chen is a computer scientist, academic, and author. He is a professor of computer science and engineering at Washington University in St. Louis.[1]
Chen's research focuses on computer science, particularly machine learning, deep learning, and data mining.[2] He has contributed to numerous publications and written several book chapters, including "Clustering Parallel Data Streams" and "The Evaluation of Partitioned Temporal Planning Problems in Discrete Space and its Application in ASPEN".[3] He also co-authored the book Introduction to Explainable Artificial Intelligence.
Chen is an IEEE Fellow,[4] elected for his contributions to deep learning systems, and an AAIA Fellow. He also served as a Program Co-chair of the 2021 IEEE Conference on Big Data.[5]
Chen received his bachelor's degree in computer science from the University of Science and Technology of China in 1999 and his master's degree in computer science from the University of Illinois at Urbana-Champaign in 2001. He then pursued his Ph.D. in computer science at the University of Illinois at Urbana-Champaign under the guidance of Benjamin Wah,[6] completing it in 2005.[7]
Chen began his academic career in 2005 as an assistant professor in the Department of Computer Science and Engineering at Washington University in St. Louis and was promoted to associate professor in the same department in 2010. As of 2016, he is a professor there.[8] He is the Director of the Center for Collaborative Human-AI Learning and Operation (HALO) at Washington University.[9]
Chen has authored numerous publications. His research focuses on machine learning, applications of artificial intelligence in healthcare, optimization algorithms, data mining, and computational biomedicine.[2]
Chen has done significant research on the compactness and applicability of deep neural networks (DNNs), and proposed the concept and architecture of lightweight DNNs. His group invented the HashedNets architecture, which compresses prohibitively large DNNs into much smaller networks using a weight-sharing scheme.[10]
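The weight-sharing idea can be illustrated with a minimal sketch: the large "virtual" weight matrix of a layer is never stored explicitly; instead, each position is mapped by a hash-like function to one of a small number of shared parameters. The NumPy sketch below is an illustration of this general scheme under simplifying assumptions (a seeded random bucket assignment stands in for a real hash function, and a sign flip is included to reduce collision bias); it is not the published implementation.

```python
import numpy as np

def hashed_layer_forward(x, shared_weights, out_dim, in_dim, seed=0):
    """Sketch of a HashedNets-style layer: the out_dim x in_dim virtual
    weight matrix is never stored; each position (i, j) is mapped to one
    of the shared parameters, so memory scales with len(shared_weights)."""
    rng = np.random.default_rng(seed)  # stand-in for a deterministic hash function (assumption)
    bucket = rng.integers(0, len(shared_weights), size=(out_dim, in_dim))
    sign = rng.choice([-1.0, 1.0], size=(out_dim, in_dim))  # sign hash to reduce collision bias
    virtual_w = sign * shared_weights[bucket]  # reconstruct the virtual weights on the fly
    return virtual_w @ x

# Usage: a 256 x 512 virtual layer backed by only 1,000 real parameters.
shared = 0.01 * np.random.randn(1000)
y = hashed_layer_forward(np.random.randn(512), shared, out_dim=256, in_dim=512)
print(y.shape)  # (256,)
```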
Chen also developed a compression framework for convolutional neural networks (CNNs). His lab invented a frequency-sensitive compression technique in which more important model parameters are better preserved, leading to state-of-the-art compression results.[11]
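A hedged sketch of the frequency-sensitive idea: a convolutional filter is transformed into the frequency domain (here via a 2-D DCT from SciPy), the low-frequency coefficients, assumed to be the more important parameters, are kept at full precision, and the high-frequency ones are quantized coarsely before transforming back. The cutoff and quantization step below are illustrative assumptions, not the published parameters.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_filter(filt, keep=3, coarse_levels=8):
    """Illustrative frequency-sensitive compression of a conv filter:
    low-frequency DCT coefficients are preserved exactly, high-frequency
    ones are coarsely quantized (a sketch of the idea, not the exact method)."""
    coeffs = dctn(filt, norm="ortho")
    i, j = np.indices(filt.shape)
    high = (i + j) >= keep                              # high-frequency positions (assumed less important)
    step = (np.abs(coeffs[high]).max() + 1e-8) / coarse_levels
    coeffs[high] = np.round(coeffs[high] / step) * step  # coarse quantization of high frequencies
    return idctn(coeffs, norm="ortho")

filt = np.random.randn(5, 5)
approx = compress_filter(filt)
print(np.abs(filt - approx).mean())  # reconstruction error stays small
```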
Chen has made significant contributions to graph neural networks (GNNs). He and his students proposed DGCNN, one of the first graph convolution techniques that can learn a meaningful tensor representation from arbitrary graphs, and showed its deep connection to the Weisfeiler-Lehman algorithm.[12] They were the first to apply GNNs to link prediction (in the well-known SEAL algorithm) and to matrix completion, achieving state-of-the-art results.[13]
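The core of the SEAL approach can be sketched compactly: for a candidate link (u, v), extract the small enclosing subgraph around the two endpoints and label each node by its distances to u and v, then hand the labeled subgraph to a graph classifier such as DGCNN. The networkx sketch below illustrates only the subgraph-extraction and labeling step; the distance-pair labels are a simplification of the published node-labeling scheme.

```python
import networkx as nx

def enclosing_subgraph_labels(G, u, v, hops=1):
    """Extract the h-hop enclosing subgraph around a candidate link (u, v)
    and label each node by its shortest-path distances to u and v.
    A GNN classifier would then predict whether the link exists."""
    du = nx.single_source_shortest_path_length(G, u, cutoff=hops)
    dv = nx.single_source_shortest_path_length(G, v, cutoff=hops)
    nodes = set(du) | set(dv)                    # union of the two hop neighborhoods
    sub = G.subgraph(nodes).copy()
    labels = {n: (du.get(n, hops + 1), dv.get(n, hops + 1)) for n in sub}
    return sub, labels

# Usage on a toy graph: candidate link between nodes 0 and 33.
G = nx.karate_club_graph()
sub, labels = enclosing_subgraph_labels(G, 0, 33)
print(sub.number_of_nodes(), labels[0], labels[33])
```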
For time series classification, Chen advocated the use of a multi-scale convolutional neural network (MCNN), citing its computational efficiency. He showed that MCNN extracts features at varying frequencies and scales by leveraging GPU computing, in contrast to other frameworks that can only extract features at a single time scale.[14]
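A minimal sketch of the multi-scale idea: the input series is down-sampled at several scales, the same 1-D convolution is applied to each branch, and the pooled responses are concatenated so that patterns at different time scales all contribute features. The scales, kernel, and pooling used here are illustrative assumptions rather than the published MCNN architecture.

```python
import numpy as np

def multiscale_features(series, scales=(1, 2, 4), kernel=np.array([1.0, -1.0, 0.5])):
    """Apply the same 1-D convolution to several down-sampled views of a
    time series and max-pool each branch, yielding one feature per scale."""
    feats = []
    for s in scales:
        branch = series[::s]                                # down-sample by factor s
        response = np.convolve(branch, kernel, mode="valid")
        feats.append(response.max())                        # max-pool each branch
    return np.array(feats)

# Usage: features at three time scales for a noisy sine wave.
t = np.linspace(0, 4 * np.pi, 256)
print(multiscale_features(np.sin(t) + 0.1 * np.random.randn(256)))
```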