The emergence of artificial intelligence that uses cameras to check for health and safety breaches in the workplace has raised concerns about a creeping culture of workplace surveillance and a lack of protections for workers.
- AI can use cameras to monitor workplaces for health and safety violations and hazards
- One company with Australian clients says blurring faces is among the measures taken to protect privacy
- Experts say Australian law has not kept pace with the rising use of AI in the workplace
AI technology that uses CCTV cameras can be trained to identify breaches, such as a worker not wearing gloves or a hard hat, or to identify hazards like spills.
One company, Intenseye, reports having multiple Australian customers for the new technology, including a major mining company.
But Nicholas Davis, professor of emerging technology at the University of Technology Sydney, said this latest use of AI raised questions about a creeping rise of a surveillance industry that relied on workers being watched constantly.
“Even though this is just one small example that can be justified under certain health and safety grounds — potentially can be justified — there are probably a million other use cases where similar technology can also be justified,” Professor Davis said.
The Office of the Australian Information Commissioner (OAIC) said it was aware of an increasing use of technology, including AI technology, to monitor workplace behaviour.
“Our office has received some enquiries and complaints in relation to workplace surveillance generally,” the OAIC said in a statement.
Company says workers are protected
Although artificial intelligence is already being used in Australian workplaces in many ways, pairing AI with CCTV is an emerging technology.
Intenseye uses cameras to monitor facilities and provide “real-time violation notifications”.
The company said its system blurred the faces of individuals to prevent retaliation for violations and to protect the privacy of workers.
Intenseye customer success manager David Lemon said there had been instances where customers asked for faces to be unblurred, or for other information that he said would be an invasion of privacy.
But he said the company would not provide that information.
He said there was rising demand for the technology, which could be trained to identify behaviour or breaches based on the specific concerns of the employer.
Alerts about violations appeared on a cloud-based digital platform, and Mr Lemon said the company had developed a new system that hid the “human” visual from video footage to provide only a “stick-figure” visual to the employer.
Mr Lemon said the company was aware of its obligations to protect the privacy of employees and sought legal advice to ensure it was complying with data and privacy laws in various countries.
He said the company complied with industry regulations and was audited by the AI Ethics Lab.
“This is cutting edge technology, it’s the frontier, it’s very new,” he said.
“Even customers who have a large appetite for computer vision do have some fears just because it’s change. It’s new. It can often be scary.”
Laws lag behind technology
Professor Davis, who studies the regulation of technology as it relates to human rights, said the emergence of this type of technology raised questions about consent, safety culture, and employer responsibility in the case of AI mistakes.
Although companies could take measures to ensure ethical use of AI, he said Australia's surveillance laws were not equipped to effectively regulate its use or define what its limitations should be.
“It doesn’t anticipate things like breakthroughs in machine learning,” he said.
The Privacy Act 1988 is currently being reviewed by the federal government, with the emergence of AI technologies listed as one of the reasons behind the review.
The Act currently does not specifically address workplace surveillance, though it does require employers to give notice if they intend to collect personal information.
Professor Davis is part of a team at UTS, including former Human Rights Commissioner Ed Santow, working on a model law to regulate the use of facial recognition technology.
“There is a recognition or realisation that we do need much more dynamic, flexible, and appropriate fit-for-purpose regulation for these kinds of technologies,” he said.
“I do think that employers increasingly have to be very rigorous and skeptical, and challenging about products that are marketed to them [where] it’s not quite clear how they work.”
Cameras are here to stay
The Department of Industry, Science and Resources has developed an Artificial Intelligence Ethics Framework for businesses to test AI systems against a set of ethical principles.
But Jim Stanford, economist and director of the Centre for Future Work at the Australia Institute, said the lack of regulation left the technology open to abuse and misuse.
“You have to have legal protection, you have to have enforcement, you have to have monitoring of the monitors,” he said.
Mr Stanford, who co-authored a report on electronic monitoring and surveillance in Australian workplaces, said employers must also consider the health and behavioural impacts on workers of being constantly monitored.
“If people feel they’re being monitored all the time they’re going to do everything they can to try and keep the boss happy,” he said.
“That in and of itself can lead to speed up and intensification of work that is bad for health in the long run.”
Mr Stanford said he was not opposed to having video cameras in the workplace, and that the use of them was already widespread.
“The question is ‘how is that used? And what type of protections do people have?’” he said.
“And this is where Australia’s regulatory regime is badly, badly lagging the technology.”