Keynotes

Keynote Speakers of ACM MM 2017

Achin Bhowmik
(SID Fellow)
CTO & EVP, Starkey, USA

Keynote Talk Title: Enhancing and Augmenting Human Perception with Artificial Intelligence

Time: 9:00-10:00, Tuesday, Oct. 24, 2017

Abstract:

In recent years, there has been an astounding pace of advances in artificial sensing and intelligence technologies. These breakthrough developments are increasingly enabling us to create devices and systems that can sense and understand the world around them. In this keynote, we will present a synopsis of the current state of the art in enhancing and augmenting human sensory and perceptual processes with applications based on novel transduction devices and artificial intelligence technologies. The presentation will also highlight the emerging trends in these technologies, as well as the associated business impact and opportunities.

Biography:

Dr. Achin Bhowmik is the chief technology officer and executive vice president of engineering at Starkey, the largest hearing technology company in the US, a privately held business with more than 5000 employees and operations in more than 100 countries worldwide. In this role, he is responsible for leading the company’s research and engineering efforts.

Prior to joining Starkey, Dr. Bhowmik was vice president and general manager of the Perceptual Computing Group at Intel Corporation. There, he was responsible for R&D, engineering, operations, and businesses in the areas of 3D sensing and interactive computing systems, computer vision and artificial intelligence, autonomous robots and drones, and immersive virtual and merged reality devices. Previously, he served as the chief of staff of the Personal Computing Group, Intel’s largest business unit.

Dr. Bhowmik holds adjunct and guest professor positions, advising graduate research and lecturing on human-computer interaction and perceptual computing technologies, at the Liquid Crystal Institute of Kent State University; Kyung Hee University, Seoul; the Indian Institute of Technology, Gandhinagar; Stanford University; and the University of California, Berkeley, where he also serves on the board of advisors for the Fung Institute for Engineering Leadership.

Dr. Bhowmik was elected a Fellow of the Society for Information Display (SID). He received the Industrial Distinguished Leader Award from the Asia-Pacific Signal and Information Processing Association. He serves on the executive board of SID and the board of directors of OpenCV. He has over 200 publications, including two books, and holds 34 issued patents.

Bill Dally
(NAE member, ACM/IEEE/AAAS Fellow)
Senior Vice President and Chief Scientist, NVIDIA, USA

Keynote Talk Title: Efficient Methods and Hardware for Deep Learning

Time: 14:00-15:00, Tuesday, Oct. 24, 2017

Abstract:

The current resurgence of artificial intelligence is due to advances in deep learning. Systems based on deep learning now exceed human capability in speech recognition, object classification, and playing games like Go. Deep learning has been enabled by powerful, efficient computing hardware. The algorithms used have been around since the 1980s, but it has only been in the last few years – when powerful GPUs became available to train networks – that the technology has become practical. This talk will review the current state of deep learning and describe recent research on making these systems more efficient.

Biography:

Bill is Chief Scientist and Senior Vice President of Research at NVIDIA Corporation and a Professor (Research) and former chair of Computer Science at Stanford University. Bill and his group have developed system architecture, network architecture, signaling, routing, and synchronization technology that can be found in most large parallel computers today.

While at Bell Labs, Bill contributed to the BELLMAC32 microprocessor and designed the MARS hardware accelerator. At Caltech he designed the MOSSIM Simulation Engine and the Torus Routing Chip, which pioneered wormhole routing and virtual-channel flow control. At the Massachusetts Institute of Technology his group built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanisms from programming models and demonstrated very low-overhead synchronization and communication mechanisms. At Stanford University his group developed the Imagine processor, which introduced the concepts of stream processing and partitioned register organizations; the Merrimac supercomputer, which led to GPU computing; and the ELM low-power processor.

Bill is a Member of the National Academy of Engineering, a Fellow of the IEEE, a Fellow of the ACM, and a Fellow of the American Academy of Arts and Sciences. He has received the ACM Eckert-Mauchly Award, the IEEE Seymour Cray Award, the ACM Maurice Wilkes Award, and the IPSJ FUNAI Achievement Award. He currently leads projects on computer architecture, network architecture, circuit design, and programming systems. He has published over 200 papers in these areas, holds over 100 issued patents, and is an author of the textbooks Digital Design: A Systems Approach, Digital Systems Engineering, and Principles and Practices of Interconnection Networks.

Injong Rhee
CTO & EVP, Samsung Electronics, Korea

Keynote Talk Title: Building multi-modal interfaces for smartphones

Time: 9:00-10:00, Wednesday, Oct. 25, 2017

Abstract:

Smartphones are the central component of our modern, connected life. We carry them from the moment we wake up until the time we go to bed. Continuous technology innovation has created ever more sophisticated phones. Their small size belies a complexity that is largely hidden from the user, making it almost impossible to discover and use these expanded capabilities. Designers are forced to make tradeoffs between adding more capabilities and burying the functionality in menu hierarchies. Users are frustrated when forced to accept either of these choices. This is the fundamental limitation of modern touch-based interfaces on smartphones.

On the path to solving this problem, it is natural to ask, “How can we make it easier for users to learn and use the new features?” We believe this is the wrong question to ask, and we took a fundamentally different approach.

Dr. Injong Rhee will discuss this problem, its ramifications for multimedia, and the future outlook.

Biography:

Injong Rhee is a businessman, engineer, researcher, and teacher. He is currently CTO and Head of Engineering at Samsung Mobile, in charge of all software and services globally. He is frequently recognized for transforming the company into a significant player in the software industry, and was most recently described as “the invisible hand of a great designer” behind Samsung Galaxy smartphone software and the user experience design that made “Samsung’s software catch up to its hardware.”

As CTO, Injong is responsible for all aspects of software across Samsung Mobile’s products, including UX, product management, development, support, and updates. His most recent accomplishments include the software powering flagship smartphones such as the Galaxy S8 and Note 8, Gear smartwatches, and IoT services. During his six years with Samsung he has spearheaded several signature software and services businesses from incubation to full-blown businesses, most notably Samsung Knox, Samsung Pay, and Bixby. Knox is Samsung’s fortified version of Android and a companion suite of enterprise management capabilities; under his leadership it has grown from zero into a multibillion-dollar business and is on track to double again this year. Samsung Pay is the most commonly used mobile payment solution rivaling Apple Pay and Android Pay, and has been launched in 17 markets around the globe. Bixby is an AI assistant integrated into the interfaces of all Samsung products, including smartphones, smart TVs, consumer electronics, and IoT devices, and is often praised for its detailed device control. Injong continues his mission to transform Samsung into a software and services powerhouse.

Prior to joining Samsung, Injong was a professor of computer science at North Carolina State University for 14 years. He is the celebrated inventor of BIC and CUBIC, the default TCP congestion-control algorithms used in every Linux server and Android phone in the world. He is a two-time winner of the prestigious IEEE William R. Bennett Prize for his work in computer networks (2013, 2016). He has published over 100 journal and conference papers in the areas of distributed computing, computer networks, and mobile computing. He received his PhD in Computer Science from the University of North Carolina at Chapel Hill.

Edward Y. Chang
(IEEE Fellow)
President, HTC, Taiwan

Keynote Talk Title: DeepQ: Advancing Healthcare Through AI and VR

Time: 14:00-15:00, Wednesday, Oct. 25, 2017

Abstract:

Quality, cost, and accessibility form an iron triangle that has prevented healthcare from achieving accelerated advancement in the last few decades: improving any one of the three metrics may degrade the other two. However, thanks to recent breakthroughs in artificial intelligence (AI) and virtual reality (VR), this iron triangle can finally be shattered. In this talk, I will share the experience of developing DeepQ, an AI platform for AI-assisted diagnosis and VR-facilitated surgery. I will present three healthcare initiatives we have undertaken since 2012: Healthbox, Tricorder, and VR surgery, and explain how AI and VR play pivotal roles in improving diagnosis accuracy and treatment effectiveness; more specifically, how we have dealt not only with big-data analytics but also with small-data learning, which is typical of the medical domain. The talk concludes with roadmaps and a list of open research issues in multimodal signal processing, fusion, and mining to achieve precision medicine and surgery.

Note: Our Healthbox (with Under Armour) and VR (VIVE and Vivepaper) initiatives were awarded several top prizes at CES and MWC in 2016 and 2017, while the Tricorder project was awarded second place (out of 310 entrants) and US$1,000,000 by the XPRIZE Foundation.

Biography:

Edward Chang currently serves as the President of Research and Healthcare (DeepQ) at HTC. Ed’s most notable work is co-leading the DeepQ project (with Prof. CK Peng at Harvard), working with a team of physicians, scientists, and engineers to design and develop mobile wireless diagnostic instruments that can help consumers make their own reliable health diagnoses anywhere, at any time. The project entered the Tricorder XPRIZE competition in 2013 alongside 310 other entrants and was awarded second place in April 2017 with a US$1M prize. DeepQ is powered by a deep architecture built to quest for cures. A similar deep architecture also powers Vivepaper, an AR product Ed’s team launched in 2016 to support immersive reading experiences (for education, training, and entertainment).

Prior to his HTC post, Ed was a director of Google Research for 6.5 years, leading research and development in several areas including scalable machine learning, indoor localization, social networking and search integration, and Web search (spam fighting). His contributions to parallel machine-learning algorithms and big-data mining have been recognized through several keynote invitations, and the open-source code his team developed has been collectively downloaded over 30,000 times. His work on indoor localization with project X was deployed via Google Maps (see the XINX paper and ASIST/ACM SIGIR/ICADL keynotes). Ed’s team also developed the Google Q&A system (codename Confucius), which was launched in over 60 countries.

Prior to Google, Ed was a full professor of Electrical Engineering at the University of California, Santa Barbara (UCSB). He joined UCSB in 1999 after receiving his PhD from Stanford University, and was tenured in 2003 and promoted to full professor in 2006. Ed has served on ACM (SIGMOD, KDD, MM, CIKM), VLDB, IEEE, WWW, and SIAM conference program committees, and co-chaired several conferences including MMM, MM, ICDE, WWW, and MOOC. He is a recipient of the NSF Career Award, IBM Faculty Partnership Award, and Google Innovation Award. He is a Fellow of IEEE for his contributions to scalable machine learning.

Scott Silver
Vice President, Google, USA

Keynote Talk Title: Bringing a Billion Hours to Life

Time: 9:00-10:00, Thursday, Oct. 26, 2017

Abstract:

YouTube recently announced the milestone of 1 billion hours watched each day and 400 hours of video uploaded every minute. All of that consumes an enormous amount of computation, storage, and bandwidth. To make this all affordable and reliable, we have built carefully upon decades of research and investment in compression, caching, network protocols, and many other advances. This talk will review where we are, how we got here, and where we (think) we’re going.
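To give a rough sense of the scale behind those figures, a back-of-envelope calculation helps. The sketch below is a minimal illustration, assuming an average stream bitrate of 5 Mbit/s; that bitrate and everything derived from it are assumptions for illustration, not actual YouTube numbers.

```python
# Back-of-envelope scale estimate from the figures in the abstract.
# ASSUMPTION: a 5 Mbit/s average video bitrate (illustrative only,
# not an actual YouTube figure).
UPLOAD_HOURS_PER_MINUTE = 400          # from the abstract
WATCH_HOURS_PER_DAY = 1_000_000_000    # from the abstract
AVG_BITRATE_BPS = 5_000_000            # assumed 5 Mbit/s per stream

# Storage: hours uploaded per day, converted to bytes at the assumed bitrate.
upload_hours_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24
upload_bytes_per_day = upload_hours_per_day * 3600 * AVG_BITRATE_BPS / 8
print(f"Uploads: ~{upload_bytes_per_day / 1e15:.1f} PB/day before transcoding")

# Delivery: average concurrent streams implied by a billion watch-hours/day.
avg_concurrent_streams = WATCH_HOURS_PER_DAY / 24
egress_bps = avg_concurrent_streams * AVG_BITRATE_BPS
print(f"Egress: ~{egress_bps / 1e12:.0f} Tbit/s average delivery bandwidth")
```

Under these assumptions the figures work out to roughly 1.3 PB of new video per day before transcoding and on the order of 200 Tbit/s of average delivery bandwidth, which is why the decades of compression and caching work the talk references matter so much.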

Biography:

Scott is a VP of Engineering at Google, leading YouTube engineering. Previously he worked for 10 years in Ads, leading advertiser and publisher systems, including AdWords, DoubleClick Bid Manager, DoubleClick for Publishers, and AdSense. Scott joined Google in 2006. Prior to Google, Scott led the ordering system at Amazon.com for four Christmas holidays and three years. He also previously led the engineering team at i-drive, an Internet storage startup. Before that he worked at Connectix, Netscape, and Apple.

Danny Lange
Vice President, Unity Technologies, USA

Keynote Talk Title: Bringing Gaming, VR, and AR to Life with Deep Learning

Time: 14:00-15:00, Thursday, Oct. 26, 2017

Abstract:

Game development is a complex and labor-intensive effort. Game environments, storylines, and character behaviors are carefully crafted, requiring graphics artists, storytellers, and software engineers to work in unison. Often, games end up with a delicate mix of hard-wired behavior in the form of traditional code and somewhat more responsive behavior in the form of large collections of rules. Over the last few years, data-intensive Machine Learning solutions have obliterated rule-based systems in the enterprise (think Amazon, Netflix, and Uber). At Unity we have explored the use of Deep Learning in content creation and Deep Reinforcement Learning in character development. We will share with the audience our learnings and the Unity APIs we use, and hopefully inspire content developers to start using these new technologies to create digital experiences that are out of this world.
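To make the character-development idea concrete, here is a minimal, self-contained sketch of reinforcement learning for a toy non-player character: an agent on a one-dimensional track learns, from reward alone, to walk to a goal. It uses plain tabular Q-learning in Python; the environment, reward, and all names are illustrative assumptions, not Unity ML-Agents APIs.

```python
# Toy illustration of reinforcement learning for character behavior:
# an agent on positions 0..9 learns to reach position 9 from reward alone.
# Plain tabular Q-learning; the environment and all names are illustrative
# assumptions, not Unity ML-Agents APIs.
import random

N_POS, GOAL = 10, 9
ACTIONS = (-1, +1)                       # step left, step right
q = [[0.0, 0.0] for _ in range(N_POS)]   # Q-value per (position, action)
alpha, gamma, eps = 0.5, 0.9, 0.1        # learning rate, discount, exploration

for episode in range(500):
    pos = 0
    while pos != GOAL:
        # Epsilon-greedy action choice: mostly exploit, occasionally explore.
        a = random.randrange(2) if random.random() < eps else q[pos].index(max(q[pos]))
        nxt = min(max(pos + ACTIONS[a], 0), N_POS - 1)
        reward = 1.0 if nxt == GOAL else -0.01   # small step cost favors short paths
        target = reward + (0.0 if nxt == GOAL else gamma * max(q[nxt]))
        q[pos][a] += alpha * (target - q[pos][a])
        pos = nxt

# After training, the greedy policy walks straight toward the goal.
print("".join("<" if row[0] > row[1] else ">" for row in q))
```

In a real game the lookup table would be replaced by a neural network over rich observations and training would run inside the engine, but this reward-driven loop is the core idea behind learning character behavior instead of hand-coding it.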

Biography:

Dr. Danny Lange is Vice President of AI and Machine Learning at Unity Technologies, where he leads multiple initiatives in the field of applied Artificial Intelligence. Unity is the creator of a flexible, high-performance end-to-end development platform used to create rich interactive 2D, 3D, VR, and AR experiences. Previously, Danny was Head of Machine Learning at Uber, where he led the efforts to build a highly scalable Machine Learning platform to support all parts of Uber’s business, from the Uber App to self-driving cars. Before joining Uber, Danny was General Manager of Amazon Machine Learning, providing internal teams with access to machine intelligence, and he launched an AWS product that offers Machine Learning as a Cloud Service to the public. Prior to Amazon, he was Principal Development Manager at Microsoft, where he led a product team focused on large-scale Machine Learning for Big Data. Danny spent 8 years on Speech Recognition Systems, first as CTO of General Magic, Inc., and then as founder of his own company, Vocomo Software. During this time he worked on General Motors’ OnStar Virtual Advisor, one of the largest deployments of an intelligent personal assistant until Siri. Danny started his career as a Computer Scientist at IBM Research.

Danny holds MS and Ph.D. degrees in Computer Science from the Technical University of Denmark. He is a member of ACM and IEEE Computer Society and has numerous patents to his credit.