Keynote 1 (Tuesday April 9th)
Abstract: Recent advances in hardware and software technologies have enabled us to re-think how we architect databases to meet the demands of today's information systems. However, this makes existing performance evaluation metrics obsolete. In this paper, I describe SAP HANA, a novel, powerful database platform that leverages the availability of large main memory and massively parallel processors. Based on this, I propose a new, multi-dimensional performance metric that better reflects the value expected from today's complex information systems.
Bio: Dr. Vishal Sikka is a member of the Executive Board of SAP AG, heading technology and innovation for the company. Sikka has responsibility for technology and platform products, including database, especially the industry breakthrough in-memory database SAP HANA, as well as analytics, mobile, application platform, and middleware. He drives emerging technologies and advanced development for the next-generation technology platform, applications, and tools. He also oversees key technology partnerships, customer co-innovation, and incubation of emerging businesses. He has global responsibility for SAP Research, as well as academic and government relations.
Sikka has been Chief Technology Officer of SAP since 2007, responsible for the overall technology, architecture, and product standards across the entire SAP product portfolio. He is the creator of the concept of “timeless software,” which underpins SAP architecture and innovation strategy.
Sikka holds a Doctorate in Computer Science from Stanford University in California, and his experience includes research in Artificial Intelligence, Programming Models and Automatic Programming, as well as Information Management and Integration – at Stanford, at Xerox Palo Alto Labs, and as founder of two startup companies.
Keynote 2 (Wednesday April 10th)
Abstract: The World-Wide Web contains vast quantities of structured data on a variety of domains, such as hobbies, products and reference data. Moreover, the Web provides a platform that can encourage publishing more data sets from governments and other public organizations and support new data management opportunities, such as effective crisis response, data journalism and crowd-sourcing data sets. For the first time since the emergence of the Web, structured data is being used widely by search engines and is being collected via a concerted effort.
I will describe some of the efforts we are conducting at Google to collect structured data, filter the high-quality content, and serve it to our users. These efforts include providing Google Fusion Tables, a service for easily ingesting, visualizing and integrating data, mining the Web for high-quality HTML tables, and contributing these data assets to Google's other services.
Bio: Alon Halevy heads the Structured Data Management Research group at Google. Prior to that, he was a professor of Computer Science at the University of Washington in Seattle, where he founded the database group. In 1999, Dr. Halevy co-founded Nimble Technology, one of the first companies in the Enterprise Information Integration space, and in 2004, Dr. Halevy founded Transformic, a company that created search engines for the deep web, which was acquired by Google. Dr. Halevy is a Fellow of the Association for Computing Machinery, received the Presidential Early Career Award for Scientists and Engineers (PECASE) in 2000, and was a Sloan Fellow (1999-2000). He received his Ph.D. in Computer Science from Stanford University in 1993 and his Bachelors from the Hebrew University in Jerusalem. Halevy is also a coffee culturalist and the author of the book "The Infinite Emotions of Coffee", published in 2011, and a co-author of the book "Principles of Data Integration", published in 2012.
Keynote 3 (Thursday April 11th)
Abstract: Until relatively recently, the development of data processing applications took place largely ignoring the underlying hardware. Only in niche applications (supercomputing, embedded systems) or in special software (operating systems, database internals, language runtimes) did (some) programmers have to pay attention to the actual hardware where the software would run. In most cases, working atop the abstractions provided by either the operating system or by system libraries was good enough. The constant improvements in processor speed did the rest. The new millennium has radically changed the picture. Driven by multiple needs (e.g., scale, physical constraints, energy limitations, virtualization, business models), hardware architectures are changing at a speed and in ways that current development practices for data processing cannot accommodate. From now on, software will have to be built paying close attention to the underlying hardware and following strict performance engineering principles. In this talk, several aspects of the ongoing hardware revolution and its impact on data processing are analyzed, pointing to the need for new strategies to tackle the challenges ahead.
Bio: Gustavo Alonso is a professor at the Department of Computer Science at ETH Zurich in Switzerland, where he has been since 1995. At ETHZ, he is part of the Systems Group and the Enterprise Computing Center. Gustavo has a degree in electrical engineering from the Madrid Technical University in Spain and an M.S. and Ph.D. in Computer Science from UC Santa Barbara. Before joining ETH, he worked at the IBM Almaden Research Center. Gustavo's research interests encompass almost all aspects of systems, from design to run time. Most of his research these days is related to multi-core architectures, large clusters, FPGAs, and cloud computing, with an emphasis on adapting traditional system software (OS, database, middleware) to these new hardware platforms.
Gustavo is a Fellow of the ACM and a Senior Member of the IEEE. He has been awarded the AOSD 2012 Most Influential Paper Award, the VLDB 2010 Ten Year Best Paper Award, and the ICDCS 2009 Best Paper Award for work on Remote Direct Memory Access. He has served in the VLDB Endowment, the ACM/IFIP/IEEE Middleware Steering Committee, as an associate editor of the VLDB Journal, as Chair of EuroSys, and as general chair or PC-chair/vice-chair in numerous conferences (VLDB, ICDE, Middleware, BPM, ICDCS, IEEE MDM).