Security Challenges in the Internet of Things (IoT)

Professor Sanjay K. Jha1

1UNSW Sydney

In this talk, drawing on my work in wireless sensor networking and machine-to-machine (M2M) communication, I will discuss how the community is converging towards the IoT vision. This will be followed by a general discussion of security challenges in IoT. Finally, I will discuss some results from my ongoing projects on the security of bodyworn devices and on secure IoT configuration management.

Wireless bodyworn sensing devices are becoming popular for fitness, sports training and personalized healthcare applications. Securing the data generated by these devices is essential if they are to be integrated into the current health infrastructure and employed in medical applications. I will discuss a mechanism to secure data provenance and location proof for these devices by exploiting the symmetric spatio-temporal characteristics of the wireless link between two communicating parties. Our solution enables both parties to generate closely matching ‘link’ fingerprints, which uniquely associate a data session with a wireless link so that a third party can verify, at a later date, which links the data was communicated over. These fingerprints are very hard for an eavesdropper to forge, are lightweight compared to traditional provenance mechanisms, and allow for interesting security properties such as accountability and non-repudiation. I will present our solution with experiments using bodyworn devices in scenarios approximating actual device deployment. I will also touch upon other research on secure configuration management of IoT devices over wireless networks.
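
For intuition, the following is a minimal sketch of the general idea behind such link fingerprints, assuming a simple RSSI-based quantisation; the function names, window size and matching threshold are illustrative and are not the exact scheme presented in the talk.

    # A toy sketch of link fingerprinting from received signal strength (RSSI):
    # both endpoints quantise the variation of their RSSI trace around a moving
    # average into a bit string; channel reciprocity makes the two traces, and
    # hence the two fingerprints, closely matching, while an eavesdropper at a
    # different location observes a different channel.
    from typing import List

    def link_fingerprint(rssi_samples: List[float], window: int = 5) -> str:
        """Quantise each RSSI sample against a trailing moving average."""
        bits = []
        for i, sample in enumerate(rssi_samples):
            lo = max(0, i - window)
            avg = sum(rssi_samples[lo:i + 1]) / (i + 1 - lo)
            bits.append('1' if sample > avg else '0')
        return ''.join(bits)

    def fingerprints_match(fp_a: str, fp_b: str, max_mismatch: float = 0.1) -> bool:
        """Accept the pair if the fraction of differing bits is small."""
        mismatches = sum(a != b for a, b in zip(fp_a, fp_b))
        return mismatches / max(len(fp_a), 1) <= max_mismatch

    device_rssi  = [-52.1, -53.0, -51.8, -55.2, -54.9, -52.4]
    gateway_rssi = [-52.4, -53.2, -51.9, -55.0, -55.1, -52.0]
    print(fingerprints_match(link_fingerprint(device_rssi),
                             link_fingerprint(gateway_rssi)))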


Biography

Professor Sanjay K. Jha is Director of the Cybersecurity and Privacy Laboratory (Cyspri) at UNSW. He is currently the UNSW lead and the IoT Security Theme lead in the Cyber Security Cooperative Research Centre (CyberCRC) in Australia.

He also heads the Network Systems and Security Group (NetSys) at the School of Computer Science and Engineering at the University of New South Wales. His research activities cover a wide range of topics in networking, including Network and Systems Security, Wireless Sensor Networks, Ad hoc/Community Wireless Networks, and Resilience and Multicasting in IP Networks. Sanjay has published over 200 articles in high-quality journals and conferences and graduated 27 PhD students. He is the principal author of the book Engineering Internet QoS and a co-editor of the book Wireless Sensor Networks: A Systems Perspective. He is an editor of the IEEE Transactions on Dependable and Secure Computing (TDSC) and served as an associate editor of the IEEE Transactions on Mobile Computing (TMC) and the ACM Computer Communication Review (CCR).

Hybrid Intelligence: Combining the Power of Human Computation and Machine Learning

Professor Fabio Casati1

1University of Trento

While machine learning has made amazing progress over the last decades and perhaps even more in recent years, there are still many practical problems that fall outside its reach.

The “classical” machine learning setup consists of a process where people label data to build a “gold” dataset, then a model is trained on it and used to make predictions or take decisions.
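
As a point of reference, a minimal sketch of this classical setup might look as follows; scikit-learn and the toy data are illustrative choices, not tied to any system discussed in the talk.

    # A minimal sketch of the "classical" setup: humans produce a gold-labelled
    # dataset, a model is trained on it, and the trained model then makes
    # predictions on new items.
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Imagine each label below was provided by a human annotator.
    X = [[0.1, 1.2], [0.9, 0.3], [0.2, 1.1], [1.0, 0.2], [0.15, 0.9], [0.8, 0.4]]
    y = [0, 1, 0, 1, 0, 1]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=2, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)  # learn from the gold labels
    print(model.predict(X_test))                        # decide on unseen items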

Hybrid intelligence extends this process by bringing together human computation and machine learning in many different ways to solve a given problem, often with a tighter coupling between the two.

In this talk I will present the concept of hybrid intelligence, discuss classes of problems that can be tackled with a hybrid approach, and present different processes that achieve solutions that are efficient from a cost perspective and that meet specified quality constraints.

One of the main end goals of this research thread – yet to be achieved – is to build a meta-algorithm that, for each given problem, identifies how to best leverage and combine human and machine computations.

We will see these approaches at work in a domain likely to be of interest to any scientist: identifying and summarizing the scientific knowledge relevant to a given research problem. In this context I will also show how a “sprinkle” of machine learning on top of human computation, and analogously a sprinkle of crowdsourcing on top of ML algorithms, goes a long way towards improving quality and reducing cost.
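
As one hypothetical illustration of such a hybrid process, the sketch below lets the model answer the items it is confident about, routes uncertain items to crowd workers, and feeds the crowd labels back as training data; the model and crowd interfaces and the threshold are assumptions, not the specific processes presented in the talk.

    # A hypothetical sketch of one hybrid human/machine process.
    def hybrid_classify(items, model, crowd_label, confidence_threshold=0.9):
        machine_answers, crowd_answers, new_gold = {}, {}, []
        for item in items:
            label, confidence = model.predict_with_confidence(item)
            if confidence >= confidence_threshold:
                machine_answers[item.id] = label      # cheap: the machine decides
            else:
                human_label = crowd_label(item)       # costly: ask the crowd
                crowd_answers[item.id] = human_label
                new_gold.append((item, human_label))  # crowd answers become training data
        if new_gold:
            model.retrain(new_gold)                   # the machine improves over time
        return machine_answers, crowd_answers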

Binary Correctness and Applications for OS Software

Thomas Sewell1

1Chalmers University, sewell@chalmers.se

 

Computer software is usually written in one language (the source language) and translated from there into the native binary language of the machine which will execute it. Most operating systems, for instance, are written in C and translated by a C compiler. If the correctness of the computer program is important, we must also consider the correctness of the translation from source to binary.
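
As a toy illustration of the question being asked, the sketch below compares a "source-level" and a modelled "binary-level" version of the same small function on a range of inputs; real translation validation, including the tool introduced next, establishes this correspondence by proof over the actual binary semantics rather than by testing.

    # Does the compiled code compute the same function as the source?
    def source_abs(x: int) -> int:
        """Source-level semantics, e.g. of the C code 'return x < 0 ? -x : x;'."""
        return -x if x < 0 else x

    def binary_abs(x: int) -> int:
        """Model of a branch-free instruction sequence a compiler might emit
        for 32-bit two's-complement integers."""
        mask = x >> 31            # arithmetic shift: -1 if x is negative, else 0
        return (x + mask) ^ mask

    # Exhaustive check over a small input range as a stand-in for a proof.
    assert all(source_abs(x) == binary_abs(x) for x in range(-2**15, 2**15))
    print("toy translation check passed")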

In the first part of this talk, I will introduce the SydTV tool, which validates the translation of low-level software from C to binary. In the specific case of the seL4 verified OS software, this validation combines with the existing verification to produce a trusted binary.

The time taken to execute a program is normally considered to be a property of the final binary rather than the source code. Some programs have essential timing constraints, which ought to be checked by some kind of analysis. In the second part of this talk, I will show how the SydTV translation analysis can be reused to support a timing analysis on the seL4 binary. This design permits the timing analysis to make use of type information from the source language, as well as specific guidance provided at the source level by the kernel developers.

Context Recognition and Urban Intelligence from Mining Spatio-Temporal Sensor Data

Flora Salim1

1Computer Science and IT, School of Science, RMIT University, Melbourne, VIC, flora.salim@rmit.edu.au  

 

Context is the most influential signal in analysing human behaviours. Effective and efficient techniques for analysing the contexts inherent in spatio-temporal sensor data from the urban environment are paramount, particularly in addressing the key growth areas of urbanization: human mobility, transportation, and energy consumption. It is important to observe and learn the context in which the data is generated, particularly when dealing with heterogeneous, high-dimensional data from buildings, cities, and urban areas.

One main challenge in spatio-temporal analytics is to discover meaningful correlations among the numerous sensor channels and other types of data from multiple domains. Often big data is not the problem; sparse data is. The high-quality annotations required are often not available. Another major issue is dynamic change in the real world, which requires models that are robust to the fast-changing urban environment. I will present the generic temporal segmentation techniques that we have used for multiple applications. I will then present the applicability of some of our ensemble methods for multivariate and multi-target prediction in real-world cases such as parking violation monitoring, predicting daily trajectories, visitor behaviour analysis, transport mode and activity recognition, and crime prediction. I will also introduce a new concept of cyber, physical, and social contexts, and show how it translates into various domain applications of our research for smarter cities, smarter buildings, and intelligent assistants.
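
As a rough illustration of what a generic temporal segmentation might do (the techniques presented in the talk may differ substantially), the sketch below cuts a sensor stream wherever a sample departs sharply from the mean of the preceding window; the window size and threshold are illustrative.

    # Start a new segment wherever the signal shifts away from its recent mean.
    from typing import List

    def segment_by_mean_shift(series: List[float], window: int = 4,
                              threshold: float = 2.0) -> List[int]:
        """Return the indices at which new segments start (index 0 included)."""
        boundaries = [0]
        for i in range(window, len(series)):
            prev_mean = sum(series[i - window:i]) / window
            if abs(series[i] - prev_mean) > threshold and i - boundaries[-1] >= window:
                boundaries.append(i)
        return boundaries

    # Example: a sensor stream that jumps between two regimes and back.
    signal = [1.0, 1.2, 0.9, 1.1, 1.0, 5.2, 5.0, 5.3, 4.9, 5.1, 1.1, 0.8, 1.0, 1.2]
    print(segment_by_mean_shift(signal))   # -> [0, 5, 10]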

Cohesive Subgraph Computation: Models, Methods, and Applications

Wenjie Zhang

Many real applications naturally use graphs to model the relationships between entities, including social networks, the World Wide Web, collaboration networks, and biological networks. Many fundamental research problems have been extensively studied due to the proliferation of graph applications. Among them, cohesive subgraph computation, which identifies a group of highly connected vertices, has received great attention from research communities and commercial organizations. A cohesive subgraph is key to graph structure analysis, and a variety of cohesive subgraph models have been proposed. In this talk, I will introduce popular models for cohesive subgraphs and discuss their applications. I will also cover some of my recent work on cohesive subgraph computation.
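
As a concrete example of one popular cohesive subgraph model, the k-core, the sketch below implements the standard peeling algorithm for core decomposition; this is a simple baseline for illustration, not the optimised methods covered in the talk.

    # Core decomposition by peeling: repeatedly remove a vertex of minimum
    # remaining degree; the running maximum of removal degrees gives each core
    # number, and the k-core is the maximal subgraph in which every vertex has
    # degree at least k.
    from collections import defaultdict

    def core_numbers(edges):
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        degree = {v: len(nbrs) for v, nbrs in adj.items()}
        core, k, remaining = {}, 0, set(adj)
        while remaining:
            v = min(remaining, key=degree.get)   # vertex of minimum remaining degree
            k = max(k, degree[v])
            core[v] = k
            remaining.remove(v)
            for u in adj[v]:
                if u in remaining:
                    degree[u] -= 1
        return core

    def k_core(edges, k):
        return {v for v, c in core_numbers(edges).items() if c >= k}

    # Toy collaboration network: a tight 4-clique plus two loosely attached vertices.
    edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"),
             ("b", "d"), ("c", "d"), ("d", "e"), ("e", "f")]
    print(k_core(edges, 3))   # -> {'a', 'b', 'c', 'd'}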

 


Biography

Wenjie Zhang is an Associate Professor in the School of Computer Science and Engineering, the University of New South Wales. She received her bachelor's and master's degrees from Harbin Institute of Technology in 2004 and 2006, and her Ph.D. degree from the University of New South Wales in 2010. Wenjie's research interests include graph, spatial and uncertain data management. Her work has received four best paper awards from international conferences. Wenjie's research is supported by four ARC Discovery projects and one ARC DECRA project. She is also involved in an industry project with HUAWEI on cohesive subgraph analysis. Her recent research focuses on algorithms, indexes, and systems for large-scale graphs and their applications, especially in social network analysis. Wenjie is an Associate Editor for IEEE TKDE, an area chair for ICDE 2019 and CIKM 2015, and a PC member for more than 40 international conferences and workshops.

Discovering the potential of Australia’s first person-centric health data set

Vicki Bennett 

Summary:

My Health Record is the first national person-centred digital health record in Australia. The Australian Institute of Health and Welfare (AIHW) has been appointed to facilitate access to this data for research and public health purposes, as approved by the yet-to-be-established Data Governance Board. This presentation will cover the process being undertaken by the AIHW to make this data available in a secure way.

Biography: 

Vicki is currently the Head of the My Health Record Data Unit at the Australian Institute of Health and Welfare, where she has held a number of different roles over the past 12 years. She was also previously the Manager of the Information Strategy Section at Medicare Australia.

Vicki has a degree in Health Information Management and a Masters in Health Informatics, and has had a diverse career both domestically and internationally. She has worked extensively across the Pacific over the past 15 years and has lectured at a range of Australian universities.

Vicki has a passion for seeing health data used appropriately at all levels of the health system, and is looking forward to the challenges of making the My Health Record data available for good research and public health purposes.

The use of AI in health

Enrico Coiera

Professor, PhD, MBBS, FACMI, FACHI

Biography

Trained in medicine and with a computer science PhD in Artificial Intelligence (AI), Professor Coiera has a research background in both industry and academia and a strong international research reputation for his work on decision support and communication processes in biomedicine.

He founded the Centre for Health Informatics at UNSW in 1999; now based at Macquarie University, it is Australia’s largest and longest-running academic research group in biomedical and health informatics. His textbook Guide to Health Informatics, now in its 3rd edition, is widely used internationally and has been translated into several languages.

Research interests

Using digital health to solve health service delivery problems, patient safety informatics, consumer e-health, translational bioinformatics, evidence-based decision support, text summarisation methods to support scientific discovery, and clinical communication.

 

Granular Computing: At the Frontiers of Knowledge-Based Representation and Knowledge Processing

Witold Pedrycz1

1Department of Electrical & Computer Engineering, University of Alberta, Edmonton AB, T6R 2V4, Canada and Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
wpedrycz@ualberta.ca

 

In the plethora of rapidly progressing and advanced areas of information technology, it is anticipated that the artefacts involved in system analysis and synthesis take into consideration both available data and domain knowledge. These lead to the development of tangible, experimentally legitimized descriptors (concepts) and associations. Concepts constitute a concise manifestation of data and serve as a backbone of efficient user-centric processing. Being built at a higher level of abstraction than the data themselves, they capture the essence of the data and usually emerge in the form of information granules. Processing information granules is carried out in the framework of Granular Computing, which includes a spectrum of formal frameworks: interval calculus, fuzzy sets, probabilities, and rough sets, among others.

We identify three main ways in which concepts are encountered and characterized: (i) numeric, (ii) symbolic, and (iii) granular. Each of these views comes with advantages and limitations, and they complement each other. Numeric concepts are built by engaging various clustering techniques; their quality, evaluated at the numeric level, is described by a reconstruction criterion. The symbolic description of concepts, which is predominant in the realm of Artificial Intelligence (AI) and symbolic computing, can be represented by sequences of labels (integers). In this way the qualitative aspects of data are captured, which facilitates further qualitative analysis of concepts and of constructs involving them by reflecting a bird's-eye view of the data and their relationships. Granular concepts augment numeric concepts by bringing information granularity into the picture and invoking the principle of justifiable granularity in their development.
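
As a small illustration of the last point, the sketch below builds an interval information granule around a numeric prototype in the spirit of the principle of justifiable granularity, trading coverage against specificity; the particular coverage and specificity functions are illustrative assumptions.

    # Choose interval bounds around the median that maximise the product of
    # coverage (fraction of data inside the interval) and specificity (a
    # decreasing function of the interval width).
    import statistics

    def justifiable_interval(data, n_candidates=100):
        med = statistics.median(data)
        spread = (max(data) - min(data)) or 1.0
        best, best_score = (med, med), -1.0
        for i in range(1, n_candidates + 1):
            half_width = spread * i / (2 * n_candidates)
            lo, hi = med - half_width, med + half_width
            coverage = sum(lo <= x <= hi for x in data) / len(data)
            specificity = 1.0 - (hi - lo) / spread   # 1 for a point, 0 for the full range
            score = coverage * specificity
            if score > best_score:
                best, best_score = (lo, hi), score
        return best

    # Example: readings forming one numeric concept, plus an outlier the
    # granule does not need to cover.
    readings = [4.1, 4.3, 4.2, 4.5, 4.0, 4.4, 4.2, 6.8]
    print(justifiable_interval(readings))   # roughly (4.0, 4.5)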