- build data security into their practices by design;
- where possible, de-identify personal information so that it cannot be re-identified;
- consider data minimisation – i.e. limit the amount of data collected; and
- provide individuals with clear and prominent privacy notices, so they have some choice in relation to the personal information they provide.
2. Automation and artificial intelligence
Automated IT systems, and other automated technologies like drones and driverless cars, are another major source of big data. It is estimated that a single automated vehicle can produce as much data as 3,000 people surfing the Internet, while a small fleet of drones could create 150 terabytes of data (enough to fill the hard drives of 300 laptops) per day.4
As well as generating big data, such automated systems rely heavily on big data to operate. If there are deficiencies in the underlying data, this can have significant unintended results. A recent example is the failure of the automated fraud detection system used by the Department of Human Services (Centrelink).5 Without robust controls and checks, an automated system can generate information about individuals that is incorrect – which can have serious consequences for those individuals and for the businesses which rely on the accuracy of the information. It could also breach the Australian privacy principle that requires businesses to take reasonable steps to handle only high-quality information.

In order to build consumer confidence in automated systems, businesses should think carefully about how to deal with the data which is generated and used by such systems. To give some examples of the consumer concerns that have been (and will continue to be) raised in this regard:
- Systems can collect and analyse data about people’s tone, emotion and expression. This type of information is intensely personal, but would probably not be considered ‘sensitive information’ under Australia’s privacy laws. Will consumers be concerned if this information is treated with no greater sensitivity than their name or email address?
- IT vendors are increasingly combining the information of several businesses in a particular industry to produce artificial intelligence systems that can better understand individual preferences. Will consumers feel comfortable if data they provide to one business is combined in this way and shared across a broader industry?
- Automated vehicles continuously broadcast messages to other vehicles or to transport systems. This provides detailed information about a person’s movements to many different organisations. With no oversight body for the management of such data, will consumers be comfortable with what the movement of their vehicles is telling the wider world about them?
Road and traffic authorities are already working on some of the privacy issues raised by autonomous vehicles. As part of this work, Austroads has made several suggestions to better protect privacy, including:
- the use of rotating pseudonyms to limit the identification of individual vehicles;
- requiring in-vehicle mechanisms (or alternative processes) that enable a short privacy collection notice to be provided to (and acknowledged by) the driver; and
- the adoption of an industry code of practice for autonomous vehicle manufacturers.6
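To make the first of these suggestions concrete, a rotating pseudonym scheme could work roughly as follows: a vehicle broadcasts under a short-lived random identifier and replaces it at a fixed interval, so that a long trail of messages cannot easily be linked back to one vehicle. This is a minimal illustrative sketch only – the class name, rotation policy, and message format are assumptions for the example, not details drawn from the Austroads report.

```python
import secrets

class PseudonymousVehicle:
    """Illustrative only: broadcasts under a rotating random pseudonym."""

    def __init__(self, rotate_every: int = 10):
        self.rotate_every = rotate_every  # messages sent per pseudonym
        self._sent = 0
        self._pseudonym = self._new_pseudonym()

    @staticmethod
    def _new_pseudonym() -> str:
        # A fresh random identifier with no vehicle details embedded,
        # so successive pseudonyms cannot be linked to each other.
        return secrets.token_hex(8)

    def broadcast(self, payload: str) -> dict:
        # Rotate the identifier once the current one has been used enough.
        if self._sent >= self.rotate_every:
            self._pseudonym = self._new_pseudonym()
            self._sent = 0
        self._sent += 1
        return {"id": self._pseudonym, "payload": payload}

vehicle = PseudonymousVehicle(rotate_every=3)
ids = {vehicle.broadcast("position-update")["id"] for _ in range(9)}
# Nine messages sent under a rotation interval of three yield three
# distinct pseudonyms, limiting how long any one vehicle can be tracked.
```

The trade-off, as the Austroads work recognises, is between unlinkability for the driver and the ability of transport systems to make legitimate use of vehicle messages.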
Biometrics – such as fingerprints, retina and facial recognition – are used primarily for identification and authentication. Smartphones, for example, have used fingerprint recognition in place of passcodes as a security measure for many years. But this technology will become more important as transactions are increasingly automated and businesses move away from traditional, and simplistic, username-password security measures.

Businesses that use, or are planning to use, biometrics for identification and authentication should be aware that biometric information is subject to a higher level of privacy protection than more traditional forms of identification. This is because biometric identifiers are considered to be ‘sensitive information’, which means that they:
- may only be collected with consent, except in specified circumstances (this is not required for other types of personal information);
- must not be used or disclosed for a secondary purpose unless the secondary purpose is directly related to the primary purpose for collection and within the individual’s reasonable expectations;
- cannot be used for the secondary purpose of direct marketing; and
- cannot be shared by related entities in the same way that they may share other personal information.
While these measures will provide some additional protection to biometric data, many people still express serious concerns about the collection of such data. Biometric data can protect against identity fraud, but the potential for misuse is clear. A stolen card can be replaced, but if the digital file of an iris pattern is swiped, the victim may be subjected to continuing fraud. There are also fears that such information may be misused – for example, facial recognition technology makes surveillance and tracking much easier, and DNA information could be used to influence medical insurance premiums. Meeting the higher standards afforded under privacy laws to ‘sensitive information’ may not be sufficient to allay these concerns, particularly given that privacy laws do not always keep up with the ways that new technologies deal with personal data or community expectations in response to these developments.

Endnotes:
1. Productivity Commission, ‘Data Availability and Use: Overview and Recommendations’, 2017.
2. Forbes, ‘Roundup of Internet of Things Forecasts’, 2016.
3. OAIC, ‘Privacy shortcomings of Internet of Things businesses revealed’, 2016.
4. FTC Staff Report, ‘Internet of things: Privacy & Security in a Connected World’, 2015.
5. Bloomberg, ‘Here Comes the War for Commercial Drone Dominance’, 2017.
6. ABC News, ‘We’re all talking about the Centrelink debt controversy, but what is ‘robodebt’ anyway?’, 2017.
7. Austroads, ‘Privacy Impact Assessment (PIA) for Cooperative Intelligent Transport System (C-ITS) data messages’, 2017.