Professor Vijay Varadharajan Technical Distinctions


In this information age of large-scale, distributed, heterogeneous technologies (ranging from fixed networks to wireless and mobile networks, from fixed distributed applications and services to mobile software, from small devices and sensors to large-scale cluster computing and distributed databases, and on to ever-growing social networking applications), with each of these technologies spanning multiple platforms, many vendors providing different solutions, and users ranging from individuals to small and medium enterprises to large corporations and governments, Vijay believes the key role of a technologist is essentially to provide information and services at the right time and at the right place: services that are trustworthy and dependable, that add value to users and that enhance the quality of their lives.

Experience and Expertise

Vijay's research has been in the area of computing, systems and network security for more than 30 years. An important characteristic of his research is that it has involved a range of areas in Computer Science, Engineering, Information Systems and Business Applications. In some sense, security technology has acted as a thread sewing these multiple facets of Systems, IT and Engineering together, that is, Security in -- Networks, Operating Systems, Distributed Systems, Wireless and Mobile Systems, Middleware, Distributed Applications, Software Design, Models and Policies, as well as Business Applications in different industry segments such as Telecoms, Finance, Healthcare and Ecommerce, together with their deployment and management. In fact, such broad expertise and experience is vital in the field of cyber security, as securing the system as a whole is critically important and a system is only as secure as its weakest link.

Technical Distinctions

1981-1984 - Secure PRESTEL Viewdata System

Vijay developed a secure file and communication system that allowed secure transfer of data using an Apple IIe machine over the public switched telephone network, and also enabled storage of data on a British Telecom server as "PRESTEL" pages. PRESTEL was a British Telecom system of the early 1980s for storing data as "web" pages. It predates the World Wide Web as well as France's Minitel. Vijay's work was used by the Royal Bank of Scotland to store users' bank statements in secure encrypted form. This was the first secure integrated communication and file storage system of its kind in 1982, using DES encryption and the Diffie-Hellman public key distribution scheme, making secure PRESTEL pages accessible over public networks. (Part of PhD Thesis sponsored by British Telecom Research Labs, U.K).
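
The Diffie-Hellman key distribution scheme used in that system can be sketched as follows; the tiny modulus here is purely illustrative (a real deployment, then as now, uses a large prime), and the DES encryption of page contents is omitted:

```python
import secrets

# Toy Diffie-Hellman key agreement. The small prime modulus is an
# illustrative assumption; production systems use much larger primes.
p = 0xFFFFFFFB  # a prime (2**32 - 5), far too small for real use
g = 5           # public generator

def dh_keypair(p, g):
    """Pick a private exponent x and publish g^x mod p."""
    x = secrets.randbelow(p - 2) + 1
    return x, pow(g, x, p)

# Each party publishes its public value; both then derive the same
# shared secret, since (g^b)^a = (g^a)^b (mod p).
a_priv, a_pub = dh_keypair(p, g)
b_priv, b_pub = dh_keypair(p, g)
shared_a = pow(b_pub, a_priv, p)
shared_b = pow(a_pub, b_priv, p)
```

The agreed secret would then seed a symmetric cipher (DES in the original system) protecting the PRESTEL page contents.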

1981-1984 - Factorisation Based Trapdoor Cryptographic Systems

Vijay developed a theoretical foundational basis for factorisation-based trapdoor systems using ideal theory during 1981-1984. The RSA system, still today the most well-known factorisation-based trapdoor system, had been published in 1978/79. RSA's public key scheme works on the ring of integers and is based on the difficulty of factoring large integers. The theoretical framework Vijay developed is a basis for the design of a class of factorisation-based trapdoor systems, including public key systems over other mathematical structures, such as matrix rings, polynomial rings and algebraic number fields, as well as the ring of integers of RSA. He derived constraints under which such systems work and, from a practical point of view, suggested at the time the matrix system as a useful one because it could be applied to images. His comprehensive framework remains today the most complete piece of work in the area of factorisation-based trapdoor systems. (Part of PhD Thesis: Techniques for Securing Computer Communications, Sponsored by British Telecom Research Labs, U.K, 1984)
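
The ring-of-integers case of such factorisation-based trapdoor systems (i.e. textbook RSA) can be illustrated as follows; the toy primes and the absence of padding make this a sketch of the mathematics only, not a usable cipher:

```python
# Textbook RSA over the ring of integers: the trapdoor is knowledge of
# the factorization of n. Toy primes for illustration only.
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)    # Euler totient, computable only via p and q
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

def rsa_encrypt(m, e, n):
    return pow(m, e, n)

def rsa_decrypt(c, d, n):
    return pow(c, d, n)

c = rsa_encrypt(42, e, n)
m = rsa_decrypt(c, d, n)   # recovers 42
```

The framework discussed above generalizes exactly this construction to structures such as matrix rings and polynomial rings, where the analogue of the totient and the invertibility constraints must be re-derived.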

Late 1980s and early 1990s - Formal Models for Evaluation of Secure Systems

In the mid to late 1980s, Vijay worked on the design of formal models (i) for networked secure systems and (ii) for network security protocols.

He proposed information flow security models based on Petri nets, and then showed that such information flow security nets can be composed and refined. In principle, this approach could form the basis for the design of large-scale secure systems: one could build small verifiable components and then compose them into systems that possess certain security properties by design. He was the first to demonstrate that if (a) we specify the security of a system in terms of an information flow security property and (b) we model systems using Petri nets (which can be designed in practice), then for the specified security property we can achieve secure composition and refinement. Vijay's demonstration used just one property and identified a class of systems large enough to be practically useful. This was a key achievement at the time and attracted major attention in the security community, which was then very much focussed on the evaluation of secure systems, notably with the development of criteria such as the DoD Orange Book. Nevertheless, Vijay recognised that achieving secure composition and refinement of large systems, while possible in principle, would not be at all easy in practice, because of complexity and the level of abstraction used to describe security parameters. Even now, over 20 years later, we are not able to achieve this in general.

A further contribution at the front line of that time, well ahead of research in the security community, was the use of model checking for the verification of security properties in protocols and systems. His early work in 1988/89 used linear-time and branching-time temporal logic based model checking to verify security properties of key distribution and authentication protocols. This approach re-emerged around 2006, and some of the early resurgent work, for example at the US Naval Research Lab (Cathy Meadows), Stanford (John Mitchell) and ETHZ (David Basin), can be traced back to this work of the late 1980s.

Late 1980s and early 1990s - Network Security

Secure LAN-MAN System and IEEE 802.10 Security Architecture. Vijay led a team at Hewlett-Packard Labs working on network security, including the security of LANs, MANs, SMDS, Frame Relay and broadband networks. This team was the first to develop a complete secure network system design for LAN-MAN based network architectures. They implemented an entire demonstration system on a LAN-MAN network between Bristol, UK and Palo Alto, USA. This work led to numerous significant contributions to security development in the IEEE 802 standards. The group was also later the first to show the IEEE 802.10 Security Architecture in operation in practice.

Late 1980s and Early to Mid-1990s - Proxy and Propagation of Privileges and Open Software Foundation (OSF) Distributed Computing Environment (DCE)

Over the same period, Vijay also worked on distributed systems security, starting from secure RPC (Remote Procedure Call) and building up to secure DCE. DCE is a model and toolset for developing client/server applications, produced by the OSF, a mainstream computer industry consortium, in the early 1990s. Vijay led HP Labs' initiative on distributed systems security, and his work contributed to the development of the security architecture of, and protocols in, DCE. In particular, his work on secure proxy and delegation protocols was adopted by Hewlett-Packard as well as by the OSF for its DCE offerings.

Early to Mid-1990s - Language-based Security Policy Approach, Distributed Authorization and Praesidium

In the early 1990s, Vijay and one of his team members, Philip Allen, pioneered a language based approach to the specification of security policies at HP. This approach was widely adopted from the late 1990s and today remains the major approach to developing security policies. Vijay and his team extended this work to develop the theory and design of a distributed authorization model and system, which were then integrated with DCE. The complete Hewlett-Packard implementation of the authorization system, called the Praesidium Authorization Server, was deployed at large scale and was a very successful commercial product from the mid-1990s to the early 2000s, generating over a billion dollars each year for five years.

In addition to leading the technical work, Vijay also championed the adoption of the work both inside and outside HP. In so doing he played a significant influencing role in the creation of a cross-division Early Adopter Program, and then worked with that program, which evolved into a full-scale division at Hewlett-Packard, the Cooperative Computing Systems Division (CCSY). CCSY employed a team of more than 100 people on a permanent basis. Praesidium also received the Most Innovative Security Product Award from SC Security Magazine in London in 2000.

Late 1990s to Mid-2000s - The Web Services Authorisation Architecture and Policy Extensions to XACML

Vijay's work on distributed authorization models and systems continued with authorization in distributed object systems and web services. Among these, a major piece of work was the design of a distributed authorization architecture for web service based service oriented architectures (2002-2008). A significant outcome of this was the development of a web services authorization architecture (WSAA) layer on top of the web security layer in the service oriented architecture. The WSAA layer provided the flexibility to support multiple types of authorization policies and mechanisms. Vijay and his team integrated the WSAA layer with Microsoft's .NET environment and demonstrated a comprehensive solution to securing electronic patient records in a distributed system. This work produced additional outcomes, including policy extensions to the XML Access Control Language (XACL), which have now been incorporated into XACML. XACML (eXtensible Access Control Markup Language), standardised by the global consortium OASIS (Organization for the Advancement of Structured Information Standards), defines a declarative access control policy language. Vijay was awarded a Microsoft Trustworthy Computing Award for this work.

Early to Mid-2000s - Distributed Denial of Service (DDoS) Countermeasures

In the early 2000s, Vijay focused on security attacks on networks, and in particular distributed denial of service (DDoS) attacks. He and his team first developed techniques for counteracting DDoS attacks in the Internet. These techniques were implemented in 2003-2004 and were shown to be more efficient than other techniques known at the time for the Internet. The work received publicity in the popular press and was subsequently supported for further development over several years by the Australian Defence Department, and then by the National Science and Security Technology (NSST) program of the Australian Government Dept of Prime Minister and Cabinet. The techniques were further extended to wireless LANs, 3G networks and beyond-3G mobile networks. An important outcome was a security architecture for detecting DDoS-style attacks over heterogeneous networks involving wired, wireless and mobile networks. This work continues at present, taking wireless sensor networks into account.

Early to Mid-2000s - Mobile Agents Security

Vijay's work on a security-enhanced mobile agent model and a comprehensive security architecture for mobile agents addressed significant challenges in securing agent-based Internet applications, where programs and data move between different systems and devices over different network infrastructures. This work was conceived while Vijay was working at Microsoft Research in Cambridge with the well-known Prof. Roger Needham of Cambridge University. Once again, this work was not only published in top-rated venues (e.g. ACM CCS) but also led to two practical secure Internet-based prototype applications, Internet Flight Finder and Electronic Auction.

Mid to Late 2000s - New Trust Model and Architecture for Mobile Ad hoc Networks (MANETs)

Over the period 2003-2009, Vijay addressed security challenges in MANET protocols and was successful in providing a basis for the delivery of a trustworthy and secure operational layer for MANETs. This work provided a comprehensive approach to MANET security in three dimensions: detection/reaction, enforcement and prevention. This research was supported by the Australian Dept of Defence over several years. The outcomes were a new trust-enhanced security model and architecture for MANETs. The trust model took into account the dynamic nature of mobile nodes and was used to determine when and how to perform dynamic key management, recognising that there may not be any long-lived centralised trust authority. The new comprehensive security architecture for MANETs integrated trust, fellowship and secure routing to provide prevention and detection-reaction systems. A fully operational system based on the work was implemented. Deployment of MANETs is likely to play an increasing and vital role in the networks of the future.

Late 1990s to date - Trust Enhanced Security

One of the major threads of Vijay's research since the 1980s has been design issues around trust and trusted authorities in trusted computing. Recognition of the standing of his work on authorities (certificate and credential management, decision and evaluation) led to his appointment from 1998 to 2000 as a member of the Advisory Board of the Trusted Computing Platform Alliance (TCPA), which formulated the Trusted Platform Module (TPM) specifications. TPMs have now become more or less pervasive, being supported by over 300 companies, and are widely used today in computers (e.g. PCs) and other devices. In this trusted computing space, Vijay's focus since 2005 has been on how to enhance secure decision making using notions of trust. He coined the phrase "trust enhanced security" to refer to the use of hybrid trust evaluation to enhance the quality of secure decision making, such as an authorization decision, a routing decision, an electronic commerce transaction or, more generally, any interaction decision. Hybrid trust combines hard trust characteristics (such as certificates and credentials) with soft trust characteristics (such as reputation and social trust parameters). Vijay has elaborated several instances of hybrid trust models and has applied them in various information systems contexts, including in mobile agent application software, in mobile ad hoc network routing protocols, and in the base computing platform environment. He continues to advance research in this area of trust enhanced security.
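
As a minimal illustration of the idea (the names, weights and threshold below are assumptions for this sketch, not the published models), a hybrid trust decision might gate on hard trust and refine with soft trust:

```python
# Hybrid trust sketch: hard trust (credentials) gates the decision,
# soft trust (reputation) refines it. All values are illustrative.

def hard_trust(credentials, required):
    """Hard trust: every required certificate/credential must be held."""
    return required <= credentials

def soft_trust(ratings):
    """Soft trust: here simply the mean of past ratings in [0, 1]."""
    return sum(ratings) / len(ratings) if ratings else 0.0

def authorize(credentials, required, ratings, threshold=0.6):
    """Grant only if hard trust holds AND soft trust clears the threshold."""
    return hard_trust(credentials, required) and soft_trust(ratings) >= threshold

ok = authorize({"cert:CA1", "role:doctor"}, {"cert:CA1"}, [0.9, 0.8, 0.95])
bad = authorize({"cert:CA1"}, {"cert:CA1"}, [0.2, 0.1])  # poor reputation
```

The same pattern applies whether the decision being enhanced is an authorization, a routing choice or a transaction.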

Late 2000s to date - Dynamic Policy based Secure Virtualization

A major piece of work that Vijay and his team have been doing in recent years is on secure virtualization systems. One important security challenge is how to achieve secure and dynamic sharing of virtual resources among co-operating virtual machines on different distributed physical platforms. His work developed a dynamic security policy based model and architecture that enforces access and information flow between multiple applications in virtual machines, taking into account malware attacks in virtual machines and changes in the trust of users and platforms. A unique feature of this system is that it brings together security, trust and virtualization in an integrated model able to respond to dynamically changing security attacks and trust by adapting security policies to increase resilience. The key is the feedback element: security attacks and trust changes dynamically alter the security policies, thereby providing resilience; this is critical in future large-scale cloud based systems. This research has been supported by the Dept of Prime Minister and Cabinet and the Dept of Defence.

2010 to date - Malware and Software Security

Vijay and his team have been working in recent years on understanding advanced persistent threats (APTs) in detail, including how they are implemented, their spread methods and their evasion techniques. This work has led to the development of some novel evasion techniques of their own for bypassing anti-virus and other security mechanisms, as well as the application of these techniques to smart grid environments.

  • A novel attack vector targeting anti-virus software during updates was developed, showing how the whole system, and the anti-virus itself, can be compromised during updates. This design vulnerability was demonstrated against several major anti-virus products, including Avira, AVG, McAfee, Microsoft and Symantec.
  • This work then led to a new Anti-Virus Parasitic Malware (AV-Parmware), which attacked protected components of anti-virus software by exploiting their security weaknesses and compromised target systems by acting as a parasite on the anti-virus. Such parasitic malware can cause significant damage; countermeasures against it were then created for the existing major anti-virus products.
  • A new technique called feature-distributed malware was then developed, which dynamically distributes malware features across multiple software components in order to bypass security mechanisms such as application white-listing and the behavioural detection of anti-virus software. This technique epitomizes new trends in malware design. Effective defence mechanisms to prevent such malicious components were then proposed.
  • Another major trend in current malware is the use of legitimately signed binaries, so they can run even when the code signing mechanism is active. Typically the attacker steals code signing private keys from small software vendors, or registers a paper company and has a code signing private key issued by a Certificate Authority. Vijay and his team have developed a new cross-verification security mechanism that incorporates the same-origin policy into the current code signing mechanism, so that only software components from trusted vendors (i.e. signers) can be executed or loaded on the system. This change in the software development process alone can help to prevent several major attacks happening at the present time.

2010 to date - Secure Cloud Data Storage

When a user stores data in the cloud, the user has to rely on third parties to make decisions about the data and platform. It is critical to have appropriate security mechanisms that prevent cloud providers from accessing users' data in ways that have not been agreed upon. It is also important that the user is able to specify a range of useful access policies which control who can access the user's data stored in the cloud. Vijay and his team have developed a novel policy based access control scheme for cloud data storage, called Role Based Encryption (RBE), which enables the data owner to encrypt data and store it in the cloud in such a way that only users who satisfy the role based access policies specified by the owner are able to decrypt it. The cloud provider cannot decrypt the stored data unless the policy grants it access, and hence the trust that must be placed in the cloud provider is reduced. The access policies are specified using role based access control, as this is widely used in the real world by enterprises and users. The scheme integrates cryptography with role based access control to achieve efficient secure data storage in the cloud. A prototype has been developed using Amazon's AWS cloud services to demonstrate the system's practicality, performance and effectiveness. This work also received the prestigious Wilkes Award 2011 (named after Sir Maurice Wilkes of Cambridge University). The proposed scheme has been applied to protect electronic patient records, New Health Records, stored in a public cloud in encrypted form, allowing role based access to encrypted health records for different healthcare professionals such as doctors, nurses and hospital administrators.
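
The flavour of RBE can be conveyed by a much-simplified key-wrapping sketch: the data is encrypted once under a random data key, and that key is wrapped only for the roles the owner's policy permits. The XOR-with-hash wrapping below is purely illustrative; the actual RBE scheme uses pairing-based cryptography and supports role hierarchies, which this toy omits:

```python
import hashlib
import secrets

def kdf(role_secret, context):
    """Derive a 32-byte wrapping pad from a role secret (illustrative)."""
    return hashlib.sha256(role_secret + context).digest()

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def wrap_for_roles(data_key, policy_roles, role_secrets):
    """Owner wraps the data key for every role allowed by the policy."""
    return {r: xor_bytes(data_key, kdf(role_secrets[r], b"wrap"))
            for r in policy_roles}

def unwrap(wrapped, role, role_secret):
    if role not in wrapped:      # role not granted by the policy
        return None
    return xor_bytes(wrapped[role], kdf(role_secret, b"wrap"))

role_secrets = {"doctor": b"s1", "nurse": b"s2", "admin": b"s3"}
data_key = secrets.token_bytes(32)
wrapped = wrap_for_roles(data_key, {"doctor", "nurse"}, role_secrets)

doctor_key = unwrap(wrapped, "doctor", role_secrets["doctor"])  # recovers data_key
admin_key = unwrap(wrapped, "admin", role_secrets["admin"])     # None: no access
```

The cloud stores only the encrypted data and the wrapped keys, so a provider holding no permitted role secret learns nothing about the data.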

2010 to date - Security as a Service for Cloud

The focus of this work is on the security services that a cloud provider can offer as part of its infrastructure to its customers (tenants). We have designed a security architecture that provides a "Security-as-a-Service" model for the cloud, which the cloud provider can offer to its customers (tenants) as well as to the customers of its tenants. It is based on the recognition that different cloud customers (tenants) have different security requirements. For instance, a tenant running financial services can require more security measures than a tenant providing basic web hosting. The proposed architecture specifies a default baseline set of security services for all customers (needed to maintain the security of the ecosystem), while offering additional security services on top of the baseline for customers requiring greater security. The proposed security architecture and services are currently undergoing evaluation. We believe such a Security-as-a-Service model for the cloud is likely to be the trend in the future as cloud providers begin to adopt security and privacy as differentiators of their offerings from those of their competitors.

2015 to date - Software Defined Networks (SDN) Security

Software Defined Networking (SDN) is rapidly emerging as a disruptive technology poised to change the world of "Networking" in much the same way as cloud computing is changing the "Compute" world.

Initially, the focus of our work has been to develop a policy based security architecture for software defined networks (SDNs) in a multi-domain environment. Our security architecture enables us to specify and enforce different types of policies based on the devices, users and their location, as well as the path used for communications both within and between SDN domains. Our security system comprises security modules and is developed as applications that can run on any SDN Controller. Currently we are implementing our security architecture using the POX controller and Raspberry Pi switches, and we are able to demonstrate different use case scenarios with fine-grained security policy enforcement for different SDN based end-to-end services.
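
The kind of fine-grained, first-match policy enforcement described above can be sketched as follows (the flow attributes and the policy format are assumptions for illustration, not the actual POX application):

```python
# Toy first-match policy table for SDN flows. Field names are
# illustrative; a real controller application matches on packet
# headers, ports, user identity and path information.
POLICIES = [
    ({"src_domain": "A", "dst_domain": "B", "service": "http"}, "allow"),
    ({"src_domain": "A", "dst_domain": "B"}, "deny"),
]

def decide(flow):
    """Return the action of the first policy whose conditions all match."""
    for match, action in POLICIES:
        if all(flow.get(k) == v for k, v in match.items()):
            return action
    return "deny"  # default-deny for unmatched flows

allowed = decide({"src_domain": "A", "dst_domain": "B", "service": "http"})
denied = decide({"src_domain": "A", "dst_domain": "B", "service": "ssh"})
```

Ordering the table from most to least specific, with a default-deny fallback, is what makes per-service exceptions within a broader inter-domain rule possible.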

Another major research focus is to analyse the different types of attacks that can occur in the SDN data plane and to develop security measures to counteract them. The overall objective is to develop a security architecture for the SDN data plane. Currently our work involves the design of security techniques counteracting various attacks in the SDN data plane and the development of the corresponding security architecture.

The third major area of research is concerned with the issue of trust in SDN architectures. We are developing a hybrid trust model for SDN incorporating trusted computing and reputation based techniques. The significance of this work is that such a trust model can help to enhance the security of the Controller and more importantly improve the quality of security decisions in the SDN Controller, e.g. access policy and attack detection decisions, thereby helping to achieve greater assurance.

Finally we will integrate all these security components of policy based security management, attack detection and mitigation in the data plane as well as trust management to build a comprehensive security and trust architecture for Software Defined Networks (SDN).

2015 to date - Internet of Things (IoT) Security

There has been a rapid growth in the Internet of Things (IoT), with an ever-increasing number of physical devices being connected to the Internet at an unprecedented rate. Many IoT-based solutions are beginning to be deployed in different business sectors such as healthcare, smart homes and smart city infrastructures. Such a heterogeneous distributed IoT infrastructure poses several significant security challenges, such as the dramatic increase in the attack surface, the secure management of the large-scale data generated by IoT devices, and the need for lightweight authentication and novel access control techniques to manage access to billions of IoT devices, which often have limited resources.

Our research work focuses on the following three main challenges in IoT security.

(1) A major challenge in the emerging IoT ecosystem is how to securely manage and control access to billions of things (devices), especially when many of these devices have limited computational resources. Currently there are no practical secure authorisation solutions that are efficient. There is a need for novel IoT security solutions that achieve fine-grained access control and are lightweight, with efficient policy management and enforcement for billions of devices in a heterogeneous environment.

(2) Another fundamental challenge is in the area of trust in the Internet of Things. The fundamental question is what it means to trust a device: more specifically, what functionalities are required in a device to achieve trust, and how can trust in IoT devices be evaluated and managed in distributed IoT infrastructures. The trust associated with a device depends not just on its identity but also on its behaviour over a period of time. It is necessary to take into account both the static and the dynamic characteristics of a device in the trust model. The trust issue becomes even more important with devices that are heterogeneous and mobile, moving from one jurisdiction to another where they are not known in advance.

(3) Another major challenge is the dramatic increase in the attack surface of large scale IoT systems, with new vulnerabilities and threats; many IoT devices do not have any security functionality, and even those that do often have only primitive protections and can easily be attacked. With the ever increasing number and variety of connected devices, apps, services and protocols, there is a clear need to identify which devices have been compromised, and what data or services are being accessed by these compromised devices. This is required to assess the impact and to determine what actions to take proactively to protect against future security attacks.

On the authorisation side, we are developing a security architecture and mechanisms for privilege management and fine-grained access to large scale IoT systems and devices spanning multiple jurisdictions. Another major issue is how to ensure that users' privacy requirements are satisfied when IoT data is accessed by users and services. Often data from IoT devices is collected by a cloud service for reasoning and use by applications in the cloud. This leads to a major concern about the lack of user control over the data that is delivered by IoT devices to the cloud. The aim of our security architecture is to achieve both these objectives in an efficient manner, by suitably distributing the security decisions and evaluations between cloud services and edge computing in distributed IoT infrastructures. On the trust side, our work involves some foundational research on the design of the core functionalities required in an IoT device for it to be trusted, and the development of a trust model for IoT systems that can manage trust in a distributed IoT infrastructure. On the attack side, we will develop techniques to identify malicious IoT devices and security mechanisms to counteract attacks, as well as explore new approaches to achieving containment of malicious activities of IoT devices in a large scale environment. The suite of models and services for -- (i) secure authorisation and privilege management, (ii) trust models and techniques, together with (iii) new approaches to attack detection and containment of malicious devices in an IoT ecosystem -- will bring new insights and help to develop a multi-layered approach to security for large scale IoT systems, which does not exist today.

Our overall vision is one of enabling secure management of IoT devices, achieving secure access to large scale IoT devices and the data they generate in the provision of applications and services, as well as helping to create an ecosystem of trusted and resilient IoT devices in the future.

2015 to date – Machine Learning and Cyber Security

For many years now, there have been systems using machine learning that have been successfully deployed for fighting spam, fraud, and other malicious activities. These systems typically consist of a classifier that flags certain instances as malicious based on a fixed set of features.

Malware Analysis

We have proposed a novel scheme to detect malware, which we call Malytics. Malytics consists of three stages: feature extraction, similarity measurement and classification. The three stages are implemented by a neural network with two hidden layers and an output layer. We show that feature extraction, which is performed by tf-simhashing, is equivalent to the first layer of a particular neural network. We have evaluated Malytics' performance on both Android and Windows platforms. Malytics outperforms a wide range of learning-based techniques, as well as individual state-of-the-art models, on both platforms. We also show that Malytics is resilient and robust in addressing zero-day malware samples. On the datasets used, the F1-score of Malytics is 97.21% on Android dex files and 99.45% on Windows PE files.
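
The tf-simhashing feature extraction can be sketched as follows; the n-gram size, hash function and vector length below are illustrative assumptions, and this sketch binarizes the result rather than feeding the raw accumulator into the learned layers as Malytics does:

```python
import hashlib
from collections import Counter

def tf_simhash(data: bytes, n=2, bits=64):
    """tf-weighted simhash: each byte n-gram votes +tf/-tf on each bit
    position of its hash; the resulting sign pattern is the feature vector."""
    tf = Counter(data[i:i + n] for i in range(len(data) - n + 1))
    acc = [0.0] * bits
    for gram, freq in tf.items():
        h = int.from_bytes(hashlib.md5(gram).digest()[:8], "big")
        for b in range(bits):
            acc[b] += freq if (h >> b) & 1 else -freq
    return [1 if v > 0 else 0 for v in acc]

v = tf_simhash(b"hello world hello")  # fixed-length feature vector
```

Because the hashing is deterministic, similar byte sequences map to nearby vectors, which is what makes the subsequent similarity measurement and classification stages effective.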

Adversarial Machine Learning

Increasingly there is the related problem of the security of machine learning systems themselves in the presence of an adversary. The adversary tries to design inputs that are misclassified by the machine learning model. This is particularly important when we utilize machine learning algorithms for cyber security and for safety-sensitive applications such as malware detection, and image recognition in autonomous vehicles.

Recently, deep neural networks have been shown to lack robustness against specifically crafted adversarial samples. These adversarial samples are constructed by carefully perturbing normal samples in minor ways to deceive the machine learning models into misclassifications. Much of the current work in this area has been in the domain of image classification. Recently, there has also been an increasing interest in adversarial attacks on machine learning systems for malware detection, which is the main focus of our work in adversarial machine learning.

The aim of this research is to investigate the generation of adversarial examples and attacks on neural networks used in malware classification, and to develop techniques to counteract such adversarial attacks. We investigate evasion attacks, in which an adversary modifies original samples without changing their functionality in order to mislead a target classifier; the modified samples are called adversarial examples. We assume a black-box scenario where the adversary has access only to the feature set and the final hard-decision output of the target classifier.

We have proposed a method to construct adversarial examples using the minimum description length (MDL) principle. In our method, we first create a dataset of samples all identified as benign samples by the target classifier. We next construct a code table of frequent patterns for the compression of samples in this dataset using the MDL principle. We finally generate an adversarial example for a malware sample by adding a pattern from the code table to the sample. This pattern is selected to minimize the length of the compressed adversarial example given the code table. This modification preserves the functionalities of the original malware sample as all original API calls are kept, and only some new API calls are added.
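
The pattern-selection step described above can be sketched as follows; the code table, its code lengths and the greedy length estimate are all illustrative stand-ins for the MDL machinery:

```python
# Hypothetical code table: frequent benign API-call patterns mapped to
# code lengths in bits (shorter = more frequent under the MDL model).
CODE_TABLE = {
    frozenset({"CreateFile", "ReadFile"}): 2.0,
    frozenset({"RegOpenKey", "RegQueryValue"}): 3.5,
    frozenset({"Sleep"}): 5.0,
}

def encoded_length(apis, code_table):
    """Greedy estimate: cover the API set with code-table patterns,
    charging a fixed escape cost for each uncovered call."""
    remaining, total = set(apis), 0.0
    for pat, bits in sorted(code_table.items(), key=lambda kv: kv[1]):
        if pat <= remaining:
            total += bits
            remaining -= pat
    return total + 8.0 * len(remaining)

def best_pattern(malware_apis, code_table):
    """Choose the pattern whose addition minimizes the encoded length."""
    return min(code_table,
               key=lambda pat: encoded_length(set(malware_apis) | pat, code_table))

malware = {"VirtualAlloc", "WriteProcessMemory"}
adv = malware | best_pattern(malware, CODE_TABLE)  # original calls preserved
```

Because patterns are only ever added, every original API call of the malware survives, which is why the sample's functionality is preserved.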

We have evaluated our method for the application of static malware detection in portable executable (PE) files. We consider API calls of PE files as their distinguishing features where the feature vector is a binary vector representing the presence or absence of API calls. We show that, using our method, we are able to increase the evasion rate from 8.16 to 78.24 percent against deep neural networks while keeping the functionalities of malware samples.

2018 to date – SDN, NFV and 5G Security

The key motivation behind this project is that future networked systems must provide services to a vast array of applications and devices with competing, and even conflicting, requirements while simultaneously allowing flexible deployment. Security issues play a critical role in such networked systems. The main aim of this work is to develop a security architecture leveraging software-defined networking (SDN) and network function virtualization (NFV) that enables the secure provisioning and management of services for different networked applications.

The basic network architecture that we consider consists of (i) a VNF Orchestrator, responsible for creating and managing all VNF instances; (ii) a System Manager, responsible for creating and managing tenants and their network slices; (iii) a Network Slice Controller, managing the network slices (collections of VNFs) for each tenant; and (iv) virtual cloud infrastructures, each of which is managed by a Virtual Infrastructure Manager (VIM) residing in the SDN Controller.
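The management hierarchy above can be modelled schematically as follows. All class names and fields here are illustrative assumptions that mirror the four components just listed; this is a data-model sketch, not an implementation of the architecture.

```python
# Hypothetical object model of the management hierarchy described above.
from dataclasses import dataclass, field

@dataclass
class VNF:                       # instances created via the VNF Orchestrator
    name: str

@dataclass
class NetworkSlice:              # managed by a Network Slice Controller
    tenant: str
    vnfs: list = field(default_factory=list)

@dataclass
class VirtualInfrastructure:     # managed by a VIM residing in the SDN Controller
    name: str

@dataclass
class SystemManager:             # creates and manages tenants and their slices
    slices: dict = field(default_factory=dict)

    def create_slice(self, tenant: str) -> NetworkSlice:
        s = NetworkSlice(tenant)
        self.slices[tenant] = s
        return s

# Toy usage: a tenant's slice is a collection of VNFs.
mgr = SystemManager()
slice_a = mgr.create_slice("tenant-a")
slice_a.vnfs.append(VNF("firewall"))
```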

The focus of our current research is on the design aspects and choices that need to be considered in developing such a security architecture. These include the secure isolation of network slices; the authentication and authorization services needed to securely map network slices (and their VNFs) onto the host platforms of the infrastructure provider; the differing security requirements of different network slices; and the broader issue of trust in the provisioning of authorized network slices spanning multiple physical networks under the jurisdiction of different providers.

2018 to date – Industrial Control Systems Security

Our research work in this area involves the design of security architectures and techniques to detect and prevent attacks in networked control systems.

We have developed a new secure observer-based controller and a secure detection system. The system can detect attacks that arise from changes to the output data of the plant or to the actuator signals. In addition, as all data transferred over the network are protected using encryption, an attacker cannot obtain the real state estimates or control signals. Furthermore, our design provides privacy-preserving properties, as the observer and the controller operate on encrypted data. We have designed an observer-based residual attack detection scheme and used Paillier-based cryptographic mechanisms to prevent the attacks. In addition, for an unstable system, we have used a state-feedback controller with an unknown-input observer and quantized outputs for the estimation and detection procedures. We have also shown that the closed-loop system is stable.
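The property of the Paillier scheme that makes encrypted-state control possible is its additive homomorphism: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, and raising a ciphertext to a power scales the plaintext. A minimal textbook sketch (toy parameters, deliberately insecure, and not our deployed implementation):

```python
# Textbook Paillier with toy parameters; real systems use ~2048-bit moduli.
import math, random

p, q = 2003, 2011
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)             # valid since L(g^lam mod n^2) = lam when g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:   # r must be a unit modulo n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    return ((x - 1) // n) * mu % n   # L(x) = (x - 1) / n, then unblind with mu

# Homomorphic properties used by the encrypted observer/controller:
c1, c2 = encrypt(42), encrypt(100)
assert decrypt(c1 * c2 % n2) == 142      # Enc(a) * Enc(b)  decrypts to a + b
assert decrypt(pow(c1, 3, n2)) == 126    # Enc(a) ** k      decrypts to k * a
```

These two operations suffice to evaluate the linear combinations (weighted sums of measurements and gains) that a linear observer or controller computes, without ever exposing the plaintext states.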

More generally, our research considers networked control systems composed of multiple dynamic entities and computational control units communicating over networks that are subject to different types of attacks. In particular, some dynamic entities or control units are vulnerable to attack and can become malicious. Our objective is to ensure that the input and output data of the benign entities are protected from the malicious entities, as well as protected in transit over the networks in a distributed environment. We have developed a methodology for designing secure networked control systems that integrates cryptographic mechanisms with the control algorithms. The use of cryptographic mechanisms brings additional challenges to controller design in the encrypted state space: the closed-loop system gains and states must be compatible with the chosen cryptographic algorithms. This work has involved the design of private controllers for two typical control problems, namely stabilization and consensus, as well as numerical simulations and analysis of the results.
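For readers unfamiliar with the consensus problem, a plaintext sketch of the standard discrete-time update is given below; the graph, gain and agent values are illustrative assumptions. A private controller would carry out the same weighted sums under an additively homomorphic scheme rather than in the clear.

```python
# Illustrative plaintext consensus: each agent nudges its state toward its
# neighbours, so all states converge to the average of the initial values.
def consensus_step(x, neighbours, eps=0.25):
    # Standard update: x_i <- x_i + eps * sum over neighbours j of (x_j - x_i)
    return [xi + eps * sum(x[j] - xi for j in neighbours[i])
            for i, xi in enumerate(x)]

# Four agents on a ring graph.
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = [0.0, 4.0, 8.0, 12.0]
for _ in range(100):
    x = consensus_step(x, neighbours)
# All states converge to the initial average, 6.0.
```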

Main Research Focus

Development of new Secure and Trusted Technologies and Applications in the areas of Cloud Computing, Big Data and Internet of Things as well as Cyber Physical Systems and Infrastructures