Microarchitectural security: a pipeline with a pipe dream

Yuval Yarom

Over the past decades, processors have become increasingly intricate. This added complexity results in an ever-widening gap between program execution and the model programmers use when developing software. In this talk we explore some of the security implications of this discrepancy and describe several security vulnerabilities that stem from it.

 

Practical Post-Quantum Zero-Knowledge Proofs and Private Cryptocurrencies

Dr. Ron Steinfeld

Monash University 

We discuss new techniques for the design and analysis of efficient lattice-based zero-knowledge proofs (ZKP). First, we review previous work, and then introduce our recent work on one-shot proof techniques for non-linear polynomial relations, where the protocol achieves a negligible soundness error in a single execution, and thus performs significantly better in both computation and communication compared to prior protocols requiring multiple repetitions. To illustrate the utility of our techniques, we explain how to use them to build efficient relaxed proofs for important relations, such as one-out-of-many proofs. Despite their relaxed nature, we further show how our proof systems can be used as building blocks for advanced cryptographic tools such as ring signatures. Our ring signature achieves a dramatic improvement in length over all the previous proposals from lattices at the same security level. We then discuss an extension of our techniques to construct a practical lattice-based privacy-preserving blockchain cryptocurrency protocol called MatRiCT.


Biography

Ron Steinfeld received his Ph.D. degree in Computer Science in 2003 from Monash University, Australia. Since 2015, he has been a Senior Lecturer at the Faculty of Information Technology, Monash University, Australia. Following his Ph.D., Ron worked as a postdoctoral research fellow in cryptography and information security at Macquarie University, Australia, holding the positions of Macquarie University Research Fellow (2007-2009) and ARC Australian Research Fellow (2009-2012) in cryptography and information security. Ron completed his ARC Research Fellowship at Monash University (2012-2014). His main research interests are in the design and analysis of cryptographic algorithms and protocols, in particular in the areas of post-quantum cryptography and secure computation protocols. He has over 18 years of research experience in cryptography and information security. He has published more than 60 research papers in international refereed conferences and journals, more than 10 of which have each been cited over 100 times. He received the ASIACRYPT 2015 best paper award. He has served on the technical program committees of numerous international conferences in cryptography, is an editorial board member of the journal ‘Designs, Codes and Cryptography’, and consults in cryptography design for the software industry.

Towards High-Quality Big Data for Responsible Data Science

Divesh Srivastava

Data are being generated, collected and analyzed today at an unprecedented scale, and data-driven decision making is sweeping through all aspects of society. As the use of big data has grown, so too have concerns that poor-quality data, prevalent in large data sets, can have serious adverse consequences on data-driven decision making. Responsible data science thus requires a recognition of the importance of veracity, the fourth “V” of big data. In this talk, we present a vision of high-quality big data and highlight the substantial challenges that the first three “V”s, volume, velocity and variety, bring to dealing with veracity in big data. Due to the volume and velocity of data, one needs to understand and possibly repair poor-quality data in a scalable and timely manner. With the variety of data, often from a diversity of sources, data quality rules cannot be specified a priori; one needs to let the data “speak for itself.” We conclude with some recent results relevant to big data quality that are cause for optimism.


Biography

Divesh Srivastava is the Head of Database Research at AT&T Labs-Research. He is a Fellow of the Association for Computing Machinery (ACM), the Vice President of the VLDB Endowment, on the Board of Directors of the Computing Research Association (CRA), on the ACM Publications Board and an associate editor of the ACM Transactions on Data Science (TDS). He has served as the managing editor of the Proceedings of the VLDB Endowment (PVLDB), as associate editor of the ACM Transactions on Database Systems (TODS), and as associate Editor-in-Chief of the IEEE Transactions on Knowledge and Data Engineering (TKDE). He has presented keynote talks at several international conferences, and his research interests and publications span a variety of topics in data management. He received his Ph.D. from the University of Wisconsin, Madison, USA, and his Bachelor of Technology from the Indian Institute of Technology, Bombay, India.

Machine Learning for Data Streams

Albert Bifet

Big Data and the Internet of Things (IoT) have the potential to fundamentally shift the way we interact with our surroundings. The challenge of deriving insights from the IoT has been recognized as one of the most exciting and key opportunities for both academia and industry. Advanced analysis of big data streams from sensors and devices is bound to become a key area of data mining research as the number of applications requiring such processing increases. Dealing with the evolution of such data streams over time, i.e., with concepts that drift or change completely, is one of the core issues in stream mining. In this talk, I will present an overview of data stream mining, and I will introduce some popular open source tools for data stream mining.
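As a toy illustration of the concept drift mentioned above (a minimal sketch, not one of the algorithms in the tools covered in the talk), a shift in a numeric stream can be flagged by comparing the mean of a recent window against the long-run mean; the names and threshold here are illustrative assumptions:

```python
from collections import deque

def detect_drift(stream, window=50, threshold=0.5):
    """Return the index at which the recent-window mean diverges from
    the long-run mean by more than `threshold`, or None if it never does."""
    recent = deque(maxlen=window)      # sliding window over latest samples
    total, count = 0.0, 0              # running statistics over the whole stream
    for i, x in enumerate(stream):
        recent.append(x)
        total += x
        count += 1
        # Only test once enough history has accumulated.
        if count >= 2 * window:
            recent_mean = sum(recent) / len(recent)
            overall_mean = total / count
            if abs(recent_mean - overall_mean) > threshold:
                return i
    return None

# A stream whose mean jumps from 0 to 1 halfway through: drift is
# flagged shortly after the change point at index 200.
stream = [0.0] * 200 + [1.0] * 200
print(detect_drift(stream))
```

Production stream miners (e.g. ADWIN-style detectors in MOA or scikit-multiflow) use adaptive windows and statistical bounds rather than a fixed threshold, but the windowed-comparison idea is the same.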


Biography

Albert Bifet is a Professor at the University of Waikato and at Institut Polytechnique de Paris. Previously he worked at Huawei Noah’s Ark Lab in Hong Kong, Yahoo Labs in Barcelona, and UPC BarcelonaTech. He is the co-author of a book on machine learning from data streams, published by MIT Press. He is one of the leaders of the MOA, scikit-multiflow and Apache SAMOA software environments for implementing algorithms and running experiments for online learning from evolving data streams. He served as Co-Chair of the Industrial track of IEEE MDM 2016 and ECML PKDD 2015, and as Co-Chair of KDD BigMine (2012-2019) and the ACM SAC Data Streams Track (2012-2019).

Security Challenges in Internet of Things (IoT)

Professor Sanjay K. Jha

UNSW Sydney

In this talk, I will discuss how the community, having worked on wireless sensor networking and machine-to-machine (M2M) communication, is converging towards the IoT vision. This will be followed by a general discussion of security challenges in IoT. Finally, I will discuss some results from my ongoing projects on the security of bodyworn devices and secure IoT configuration management. Wireless bodyworn sensing devices are becoming popular for fitness, sports training and personalized healthcare applications. Securing the data generated by these devices is essential if they are to be integrated into the current health infrastructure and employed in medical applications. In this talk, I will discuss a mechanism to secure data provenance and location proof for these devices by exploiting the symmetric spatio-temporal characteristics of the wireless link between two communicating parties. Our solution enables both parties to generate closely matching ‘link’ fingerprints, which uniquely associate a data session with a wireless link such that a third party, at a later date, can verify the links the data was communicated on. These fingerprints are very hard for an eavesdropper to forge, are lightweight compared to traditional provenance mechanisms, and allow for interesting security properties such as accountability and non-repudiation. I will present our solution with experiments using bodyworn devices in scenarios approximating actual device deployment. I will also touch upon other research on secure configuration management of IoT devices over wireless networks.
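The link-fingerprint idea can be illustrated with a toy sketch (an assumption-laden simplification, not the actual scheme from the talk): because the wireless channel between two parties is nearly reciprocal, each party can quantise its own synchronised channel readings, such as RSSI values, into a bit string, and closely matching strings tie the session to that link. The sample values and the median-threshold quantiser below are illustrative:

```python
def fingerprint(rssi_samples):
    """Quantise a sequence of RSSI readings into a bit string:
    '1' if a sample is above the session median, else '0'."""
    ordered = sorted(rssi_samples)
    median = ordered[len(ordered) // 2]
    return ''.join('1' if s > median else '0' for s in rssi_samples)

def match_fraction(fp_a, fp_b):
    """Fraction of agreeing bit positions between two fingerprints."""
    agree = sum(a == b for a, b in zip(fp_a, fp_b))
    return agree / len(fp_a)

# Two parties observe near-reciprocal readings of the same channel;
# an eavesdropper at a different location would see a different profile.
alice = [-60, -72, -55, -80, -66, -58, -75, -62]
bob   = [-61, -71, -56, -79, -67, -57, -74, -63]
fa, fb = fingerprint(alice), fingerprint(bob)
print(fa, fb, match_fraction(fa, fb))
```

Despite small measurement differences, both parties derive the same bit string here; a real protocol would add error tolerance, many more samples, and cryptographic binding of the fingerprint to the data session.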


Biography

Professor Sanjay K. Jha is Director of the Cybersecurity and Privacy Laboratory (Cyspri) at UNSW. He is currently the UNSW lead and the IoT Security Theme lead in the Cyber Security Cooperative Research Centre (CyberCRC) in Australia.

He also heads the Network Systems and Security Group (NetSys) at the School of Computer Science and Engineering at the University of New South Wales. His research activities cover a wide range of topics in networking, including Network and Systems Security, Wireless Sensor Networks, Ad hoc/Community Wireless Networks, and Resilience and Multicasting in IP Networks. Sanjay has published over 200 articles in high-quality journals and conferences and graduated 27 PhD students. He is the principal author of the book Engineering Internet QoS and a co-editor of the book Wireless Sensor Networks: A Systems Perspective. He is an editor of the IEEE Transactions on Dependable and Secure Computing (TDSC) and served as an associate editor of the IEEE Transactions on Mobile Computing (TMC) and the ACM Computer Communication Review (CCR).

Hybrid Intelligence: Combining the Power of Human Computation and Machine Learning

Professor Fabio Casati1

1University of Trento

While machine learning has made amazing progress over the last decades, and perhaps even more so in recent years, there are still many practical problems that fall outside its reach.

The “classical” machine learning setup consists of a process where people label data to build a “gold” dataset, then a model is trained on it and used to make predictions or take decisions.

Hybrid intelligence extends this process by bringing together human computation and machine learning in many different ways to solve a given problem, often with a tighter coupling between the two.

In this talk I will present the concept of hybrid intelligence, discuss classes of problems that can be tackled with a hybrid approach, and present different processes that achieve solutions that are efficient from a cost perspective and that meet specified quality constraints.

One of the main end goals of this research thread, yet to be achieved, is to build a meta-algorithm that, for each given problem, identifies how to best leverage and combine human and machine computations.

We will see these approaches at work on a domain likely to be of interest to any scientist: identifying and summarizing scientific knowledge relevant to a given research problem. In this context I will also show how a “sprinkle” of machine learning on top of human computation and, analogously, a sprinkle of crowdsourcing on top of ML algorithms go a long way towards improving quality and reducing cost.

Binary Correctness and Applications for OS Software

Thomas Sewell1

1Chalmers University of Technology, sewell@chalmers.se

 

Computer software is usually written in one language (the source language) and translated from there into the native binary language of the machine which will execute it. Most operating systems, for instance, are written in C and translated by a C compiler. If the correctness of the computer program is important, we must also consider the correctness of the translation from source to binary.

In the first part of this talk, I will introduce the SydTV tool which validates the translation of low-level software from C to binary. In the specific case of the seL4 verified OS software, this validation combines with the existing verification to produce a trusted binary.

The time taken to execute a program is normally considered to be a property of the final binary rather than the source code. Some programs have essential timing constraints, which ought to be checked by some kind of analysis. In the second part of this talk, I will show how the SydTV translation analysis can be reused to support a timing analysis on the seL4 binary. This design permits the timing analysis to make use of type information from the source language, as well as specific guidance provided at the source level by the kernel developers.

Context Recognition And Urban Intelligence From Mining Spatio-Temporal Sensor Data

Flora Salim1

1Computer Science and IT, School of Science, RMIT University, Melbourne, VIC, flora.salim@rmit.edu.au  

 

Context is the most influential signal in analysing human behaviours. Effective and efficient techniques for analysing the contexts inherent in spatio-temporal sensor data from the urban environment are paramount, particularly in addressing the key growth areas in urbanization: human mobility, transportation, and energy consumption. It is important to observe and learn the context in which the data is generated, particularly when dealing with heterogeneous high-dimensional data from buildings, cities, and urban areas.

One main challenge in spatio-temporal analytics is to discover meaningful correlations among the numerous sensor channels and other types of data from multiple domains. Often big data is not the problem; sparse data is, and the high-quality annotations required are often not available. Another major issue is dynamic change in the real world, which requires a model robust to the fast-changing urban environment. I will present the generic temporal segmentation techniques that we have used for multiple applications. I will then present the applicability of some of our ensemble methods for multivariate and multi-target prediction in real-world cases, such as parking violation monitoring, predicting daily trajectories, visitor behaviour analysis, transport mode and activity recognition, and crime prediction. A new concept of cyber, physical, and social contexts will be introduced, along with how they translate into various domain applications of our research for smarter cities, smarter buildings, and intelligent assistants.

Cohesive Subgraph Computation: Models, Methods, and Applications

Wenjie Zhang

Many real applications naturally use graphs to model the relationships of entities, including social networks, the World Wide Web, collaboration networks, and biology. Many fundamental research problems have been extensively studied due to the proliferation of graph applications. Among them, cohesive subgraph computation, which identifies a group of highly connected vertices, has received great attention from research communities and commercial organizations. A cohesive subgraph is key to graph structure analysis, and a variety of cohesive subgraph models have been proposed. In this talk, I will introduce popular models for cohesive subgraphs and discuss their applications. I will also cover a few of my recent works on cohesive subgraph computation.
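One of the classic cohesive subgraph models is the k-core: the maximal subgraph in which every vertex has at least k neighbours within the subgraph. A minimal sketch of the standard peeling algorithm (an illustrative example, not the methods presented in the talk) repeatedly removes vertices whose degree falls below k:

```python
def k_core(adj, k):
    """Return the vertex set of the k-core of an undirected graph
    given as an adjacency dict, by iteratively peeling vertices
    whose remaining degree is below k."""
    alive = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            # Degree counted only over vertices that are still present.
            deg = sum(1 for u in adj[v] if u in alive)
            if deg < k:
                alive.remove(v)
                changed = True
    return alive

# A triangle (a, b, c) with a pendant vertex d attached to a:
# d is peeled from the 2-core, and the triangle survives.
adj = {'a': ['b', 'c', 'd'], 'b': ['a', 'c'], 'c': ['a', 'b'], 'd': ['a']}
print(sorted(k_core(adj, 2)))
```

Efficient implementations use a bucket queue ordered by degree to peel in O(V + E) time; other cohesive models such as k-truss and clique relaxations tighten the connectivity requirement further.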

 


Biography

Wenjie Zhang is an Associate Professor in the School of Computer Science and Engineering, the University of New South Wales. She received her bachelor’s and master’s degrees from Harbin Institute of Technology in 2004 and 2006, and her Ph.D. degree from the University of New South Wales in 2010. Wenjie’s research interests include graph, spatial and uncertain data management. Her work has received four best paper awards from international conferences. Wenjie’s research is supported by 4 ARC Discovery projects and 1 ARC DECRA project. She is also involved in an industry project with HUAWEI on cohesive subgraph analysis. Her recent research focuses on algorithms, indexes, and systems for large-scale graphs and their applications, especially in social network analysis. Wenjie is an Associate Editor for IEEE TKDE, an area chair for ICDE 2019 and CIKM 2015, and a PC member for more than 40 international conferences and workshops.

Discovering the potential of Australia’s first person-centric health data set

Vicki Bennett 

Summary:

My Health Record is the first national person-centred digital health record in Australia. The Australian Institute of Health and Welfare (AIHW) has been appointed to facilitate access to this data for research and public health purposes, as approved by the yet-to-be-established Data Governance Board. This presentation will cover the process being undertaken by the AIHW to make this data available in a secure way.

Biography: 

Vicki is currently the Head of the My Health Record Data Unit at the Australian Institute of Health and Welfare, where she has held a number of different roles over the past 12 years. She was also previously the Manager of the Information Strategy Section at Medicare Australia.

Vicki has a degree in Health Information Management and a Masters in Health Informatics, and has had a diverse career both domestically and internationally. She has worked extensively across the Pacific over the past 15 years, as well as lectured at a range of Australian universities.

Vicki has a passion for seeing health data used appropriately at all levels of the health system, and is looking forward to the challenges of making the My Health Record data available for good research and public health purposes.
