Concerns raised about new technologies that use AI and cameras to monitor workplace safety
The rise of artificial intelligence that uses cameras to check for health and safety violations in the workplace has raised concerns about a creeping workplace surveillance culture and a lack of protection for workers.
Key points:
- AI can use cameras to monitor workplaces for health and safety violations and hazards
- One company with Australian clients said blurring workers' faces was one of the steps it took to protect privacy
- Experts say Australian laws are not up to date enough to adequately regulate the increasing use of AI in the workplace
AI technology that uses CCTV cameras can be trained to identify violations such as when a worker is not wearing gloves or a hard hat, or to identify hazards such as spills.
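Vendors have not disclosed their models in detail, but systems of this kind are typically built on object detectors fine-tuned to recognise safety equipment and hazards in each camera frame. A minimal sketch of that general approach, assuming a hypothetical YOLO model fine-tuned on PPE classes (the weight file and class names below are illustrative placeholders, not any vendor's actual product):

```python
from ultralytics import YOLO

# Hypothetical sketch: assumes a YOLO model fine-tuned on PPE/hazard
# classes. "ppe_yolo.pt" and the class names are placeholders, not a
# published model.
model = YOLO("ppe_yolo.pt")

def find_violations(frame):
    """Return violation labels detected in one CCTV frame."""
    result = model(frame)[0]
    violations = []
    for box in result.boxes:
        label = result.names[int(box.cls)]
        # Classes like "no_hard_hat" or "spill" flag a violation or hazard.
        if label.startswith("no_") or label == "spill":
            violations.append(label)
    return violations
```

In practice, per-frame detections would likely be smoothed over time before an alert is raised, so a single misclassified frame does not trigger a notification.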
One company, Intenseye, reports having many Australian customers for the new technology, including large mining companies.
But Nicholas Davis, professor of emerging technologies at the University of Technology Sydney, said this latest use of AI raises questions about the growing surveillance industry that relies on workers being constantly supervised.
“While this is only one small example that could be justified on certain health and safety grounds – potentially justifiable – there could be a million other use cases where similar technology could also be justified,” Professor Davis said.
The Office of the Australian Information Commissioner (OAIC) said it was aware of the increasing use of technology, including AI technology, to monitor behavior in the workplace.
“Our office has received a number of inquiries and complaints regarding workplace surveillance generally,” the OAIC said in a statement.
Company says workers are protected
While artificial intelligence is already being used in the Australian workplace in many ways, pairing AI with CCTV is an emerging technology.
Intenseye uses cameras to monitor facilities and provide “real-time breach notifications”.
The company says its system obscures individual faces to prevent retaliation for violations and to protect workers’ privacy.
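Intenseye has not published details of its blurring pipeline, but face obfuscation is commonly done by running a face detector over each frame and blurring the detected regions. A minimal sketch of that general technique, using OpenCV's bundled Haar cascade detector as an illustrative stand-in rather than the company's actual method:

```python
import cv2

# Illustrative sketch only: OpenCV's bundled Haar cascade face detector
# plus a Gaussian blur; not Intenseye's actual pipeline.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def blur_faces(frame):
    """Return a copy of the frame with all detected faces blurred."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    out = frame.copy()
    for (x, y, w, h) in faces:
        roi = out[y:y + h, x:x + w]
        out[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return out
```

In a privacy-focused design, blurring of this kind would typically happen before footage is stored or displayed, so unobscured faces never leave the processing pipeline.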
Intenseye’s customer success manager, David Lemon, said there were instances where customers requested that faces not be blurred, or other information he thought would violate privacy.
But he said the company would not provide that information.
He said there was increasing demand for the technology, which can be trained to identify behaviors or violations based on an employer's specific concerns.
Violation alerts appear on a cloud-based digital platform, and Mr Lemon said the company had developed a new system that removes human figures from video footage entirely, providing companies with only “stick figure” visuals.
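The company's implementation is not public, but “stick figure” output is characteristic of human pose estimation, where a skeleton of body landmarks is drawn on a blank canvas instead of the original image. A sketch of that idea using the open-source MediaPipe Pose library, chosen purely for illustration:

```python
import cv2
import numpy as np
import mediapipe as mp

# Illustrative sketch: MediaPipe Pose stands in for whatever pose
# estimator a vendor might actually use.
mp_pose = mp.solutions.pose
mp_draw = mp.solutions.drawing_utils

def stick_figure(frame):
    """Return a blank canvas with only the detected body skeleton drawn."""
    with mp_pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    canvas = np.zeros_like(frame)  # discard all original pixels
    if results.pose_landmarks:
        mp_draw.draw_landmarks(canvas, results.pose_landmarks,
                               mp_pose.POSE_CONNECTIONS)
    return canvas
```

Because only the landmark skeleton is kept, the output can still show posture and movement (enough to flag an unsafe lift, for example) without revealing who the worker is.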
Mr Lemon said the company is aware of its obligation to protect employee privacy and is seeking legal advice to ensure it complies with data and privacy laws in various countries.
He said the company complies with industry regulations and is audited by the AI Ethics Lab.
“It’s cutting-edge technology, it’s frontier, it’s very new,” he said.
“Even customers with a passion for computer vision have some fears, just because of change. It's new, and often scary.”
Law lags behind technology
Professor Davis, who studies technology regulation related to human rights, said the emergence of this type of technology raises questions about consent, safety culture and employer responsibility in cases of AI errors.
While companies can take steps to ensure the ethical use of AI, he said Australian surveillance laws are not equipped to effectively regulate its use or to define what the limits should be.
“[The laws] didn't anticipate things like breakthroughs in machine learning,” he said.
The Privacy Act 1988 is currently under review by the federal government, with the advent of AI technology listed as one of the reasons for the review.
Current legislation does not specifically address workplace surveillance, although it does require employers to provide notice if they intend to collect personal information.
Professor Davis is part of a team at UTS, including former Human Rights Commissioner Ed Santow, that is working on a model law to regulate the use of facial recognition technology.
“There is an acknowledgment or awareness that we need more dynamic, flexible, and purposeful regulation for this type of technology,” he said.
“I think businesses are increasingly having to be very strict, skeptical and challenging about the products that are marketed to them [where it's] not very clear how it works.”
Cameras are here to stay
The Department of Industry, Science and Resources has developed an Artificial Intelligence Ethics Framework for businesses to test AI systems against a set of ethical principles.
But Jim Stanford, an economist and director of the Centre for Future Work at the Australia Institute, said the lack of regulation left the technology open to misuse and abuse.
“There must be legal protection, there must be enforcement, there must be oversight,” he said.
Mr Stanford, who co-authored a report on electronic monitoring and surveillance in Australian workplaces, said employers should also consider the health and behavioral impacts of continuous monitoring.
“If people feel they are being watched all the time, they will do everything they can to try and make the boss happy,” he said.
“That in itself can lead to accelerated and intensified work which is bad for health in the long run.”
Mr Stanford said he was not against having video cameras in the workplace, and that their use was widespread.
“The question is, ‘How is it used? And what kind of protection do people have?’” he said.
“And this is where Australia's regulatory regime is so poor, so far behind the technology.”