
Artificial Intelligence in Construction

November 7, 2024

As artificial intelligence technologies such as generative AI and machine learning continue to develop, and products integrating them appear on the market, the pressure to adopt them is increasing. Whether or not the hype around AI proves justified over the long term, there is no question that it is being used, and that its use is likely to increase over time. The construction industry is no exception. AI technologies are being investigated and used in a wide variety of areas, from the work site, where autonomous technologies (including drones) are being deployed, to the office, where generative AI tools are helping workers write and manage documents. But as promising as these technologies are, care needs to be taken: there are a number of legal and ethical challenges to consider before using AI tools.

Who is responsible for what AI does?

One big concern is who is responsible when things go wrong. As AI systems take on increasingly sophisticated tasks, the range of risks they create expands. For example, if a drone hits something or someone and causes damage or injury, who can be held responsible? The manufacturer? The operator? The risk of a "blame game" is high. Worse, uncertainty about who is responsible creates the potential for these risks to go unmanaged if everyone assumes they are someone else's job.

In New Zealand, the Health and Safety at Work Act 2015 creates another reason to be cautious about using AI-based technology anywhere it could pose risks to health and safety. The Act places a large part of the responsibility for ensuring that worksites are safe on you, the contractor. You should accordingly be wary of adopting any technology if you are not confident you understand the risks it poses and how they can be managed.

Who owns the data that goes in - or comes out?

Another set of issues arises around data. The use of any AI tool raises privacy, confidentiality, and intellectual property concerns. Do you have the right to use AI tools with the data and documents you hold? Do you have all the rights you need to the tools' outputs? There are also potential legal and ethical risks in providing sensitive information to AI tools, especially given that the developers of most public AI tools are open about using the information you provide as training data - which can later be regurgitated by the tool to other users outside your organisation.

How reliable is the AI - and the companies behind it?

A big AI buzzword at the moment is "hallucination": generative AI tools simply making things up. General awareness of this phenomenon is increasing: ChatGPT now includes a label at the bottom of each page that says "ChatGPT can make mistakes. Check important info", and Microsoft Copilot says "AI-generated content may be incorrect" after every response. You need to be particularly careful in New Zealand, because these tools' training data largely comes from overseas sources. There has long been a risk of looking something up on the web and getting incorrect information, but that risk is easier to mitigate with web searches because you can check where the results come from: how authoritative is the source? Is it from New Zealand? That isn't easy with artificial intelligence: even ensuring that your prompt refers to New Zealand doesn't guarantee that the results will reflect New Zealand rules or be appropriate to New Zealand conditions. Whatever you do with generative AI, the results need to be carefully checked by a human being.

There is also a commercial risk in relying on AI tools at this early stage in their development: many of them - and the companies behind them - are likely to disappear. With any emerging technology, there are more misses than hits. You may want to think twice before making your business reliant on cutting-edge technology from brand-new startups.

The ethical concerns with AI

One of the big problems with artificial intelligence systems today is that they are largely opaque. Nobody, not even AI experts, can really tell why they give the answers they do; "explainable AI" remains an open research problem in computer science. This raises an issue: should you use AI systems to perform analysis or make decisions if you cannot explain the analysis or the reasons for those decisions? For example, can an engineer legitimately use AI tools to help resolve extension of time and variation claims if the engineer cannot explain to the principal or the contractor why the final result is what it is? It is difficult, in particular, to ensure that AI tools take into account only relevant factors and not irrelevant ones - like the names of the individuals involved. There are documented cases of AI tools giving different scores to CVs that are identical except for the names on them. Even putting aside such extreme examples, the working and reasoning behind a report or decision are often as important as the final result. It doesn't matter how quickly AI can produce a report or make a decision if it can't produce convincing reasons for its conclusions.

Conclusion

AI will undoubtedly have a significant impact on the construction industry. But it is important to stay measured, and to be careful about adopting technology you do not understand, made by companies you do not recognise, using data whose provenance you do not know.

This article was written by Brendan Cash, Partner, and Miles Rout, Law Graduate in Dentons' Major Projects and Construction team. This article was originally published by Contractor Mag.