Concerns raised over new AI camera technologies that monitor workplace safety

The rise of artificial intelligence that uses cameras to check for health and safety violations in the workplace has raised concerns about a creeping workplace surveillance culture and a lack of protection for workers.

AI technology that uses CCTV cameras can be trained to identify violations such as when a worker is not wearing gloves or a hard hat, or to identify hazards such as spills.

One company, Intenseye, reports having many Australian customers for the new technology, including large mining companies.

But Nicholas Davis, professor of emerging technologies at the University of Technology Sydney, said this latest use of AI raises questions about the growing surveillance industry that relies on workers being constantly supervised.

“While this is only one small example of what could be justified on certain health and safety grounds – potentially justifiable – there could be a million other use cases where similar technology could also be claimed to be justified,” Professor Davis said.

The Office of the Australian Information Commissioner (OAIC) said it was aware of the increasing use of technology, including AI, to monitor behaviour in the workplace.

“Our office has received several inquiries and complaints regarding workplace surveillance in general,” the OAIC said in a statement.

Company says workers are protected

While artificial intelligence is already being used in the Australian workplace in many ways, pairing AI with CCTV is an emerging technology.

Intenseye’s system uses cameras to monitor facilities and provide “real-time breach notifications”.

The company says its system blurs individual faces to protect workers’ privacy and to prevent retaliation over violations.


Intenseye’s customer success manager, David Lemon, said there were instances where customers requested that faces not be blurred, or other information he thought would violate privacy.

But he said the company would not provide that information.

He said there was increasing demand for the technology, which can be trained to identify behaviour or violations based on an employer’s specific concerns.

Breach alerts appear on a cloud-based digital platform, and Mr Lemon said the company had developed a new system that removes “human” visuals from video footage, providing companies with only “stick figure” visuals.

Mr Lemon said the company was aware of its obligation to protect employee privacy and had sought legal advice to ensure it complied with data and privacy laws in various countries.

He said the company complied with industry regulations and was audited by the AI Ethics Lab.

“It’s cutting-edge technology, it’s frontier, it’s very new,” he said.

“Even customers with a passion for computer vision have some fears, just because of the change. This is new. It’s often scary.”

Law lags behind technology

Professor Davis, who studies technology regulation related to human rights, said the emergence of this type of technology raises questions about consent, safety culture and employer responsibility in cases of AI errors.

While companies can take steps to ensure the ethical use of AI, he said Australian surveillance laws were not equipped to effectively regulate its use or to define what its limits should be.

