While the concept of artificial intelligence (AI) has been around for some time, first emerging through logic and rule-based systems in the 1950s, for decades it received only intermittent attention, piquing the interest of the mainstream press only at notable milestones: IBM’s “Deep Blue” beating chess champion Garry Kasparov in 1997, for example, and the introduction of Apple’s “Siri”, the first mainstream AI-powered virtual assistant, in 2011.
However, the recent emergence of, and open access to, Gen AI through OpenAI’s release of ChatGPT (then powered by GPT-3.5) in November 2022 has dramatically increased public awareness of the potential of Gen AI to impact both business and our daily lives. The ability of applications such as ChatGPT to write text or code, compose music and create digital art has caught the public imagination, and that has translated into widespread experimentation and adoption, as well as a high level of business activity and expectation around the technology.
Of course, the development of Gen AI did not happen overnight. Rather, it was the result of huge investment and the commitment of resources by some of the largest and best-funded companies in the world. For example, Microsoft announced its multi-billion dollar investment in OpenAI in January 2023, and in October 2024 OpenAI announced that it had raised US$6.6 billion of new funding at a post-money valuation of US$157 billion. At that valuation, OpenAI alone is worth more than 87% of the companies that make up the S&P 500, making it one of the most highly valued private companies in the world.

Microsoft has not been alone in making eye-watering investments. Amazon and Google have each invested billions in Anthropic, the developer of the “Claude” chatbot that competes with ChatGPT, and Google has separately made significant investments in Gemini, its own AI model and assistant. In April 2024, Meta announced it would invest US$35 to US$40 billion in AI development, and Apple is following suit with its own significant investments in AI-powered offerings.
Investment and growth in AI hardware and infrastructure have been of a similar scale. Nvidia has made significant investments in the development of the advanced microchips that are critical to AI processing (and, as a result, has seen its market value surge by more than 1,000% over two years). There has also been an explosion of investment in data centres to support the high-performance computing that is the “back office” for AI, although power constraints and environmental concerns are growing issues in this market. In that regard, Oracle recently announced that it is investing in small modular nuclear reactors to power new data centres, given the AI demand outlook, and Microsoft has entered into a deal with Constellation Energy to restart the Three Mile Island nuclear plant in Pennsylvania to meet the surge in energy demand from the data centres needed for AI use.
These investments have resulted in remarkable growth in the value of these companies. In mid-2024, the “Magnificent 7” (Apple, Microsoft, Alphabet (Google), Amazon, Nvidia, Meta and Tesla) made up a combined 35.5% of the S&P 500, and earlier in 2024 the combined market capitalisation of these companies was roughly equal to the combined size of the stock markets of Canada, Japan and the UK. Analysts attributed much of the recent surge in the performance of these stocks to the potential of AI.
Most commentators are predicting significant productivity gains across economies as a result of the development and adoption of AI. A June 2023 McKinsey report estimated that Gen AI could add the equivalent of US$2.6 trillion to US$4.4 trillion annually to the global economy.1
The increasing adoption and use of Gen AI has given rise to various concerns and potential legal challenges. These span business, legal, moral and societal risks, and other risks will likely emerge as experience and usage grow.
In terms of copyright, the New York Times’ proceeding against OpenAI and Microsoft2 is seen as a significant test case of the constraints applicable to the development of Gen AI. The New York Times argues that OpenAI and Microsoft used its copyrighted material to “train” the large language models incorporated into ChatGPT and Copilot, and that this unauthorised use and reproduction infringes copyright.3 OpenAI and Microsoft are arguing “fair use”, including that their Gen AI products do not serve as a market substitute for the New York Times’ copyrighted content.
Perplexity has also faced scrutiny following a claim against it by News Corp, the parent company of the New York Post and Dow Jones (owner of The Wall Street Journal), alleging that Perplexity scraped content without permission and engaged in massive-scale copying to train its AI search engine.4
Similar claims by other content creators have been filed. Notably, in September 2023, a group of prominent US authors, including Jonathan Franzen, John Grisham, George R.R. Martin and Jodi Picoult, through the Authors Guild, brought a class action claim against OpenAI alleging copyright infringement by using their works to train ChatGPT.5 Other similar claims have been brought against Meta Platforms6 and the AI image generator, Stability AI.7 The claimants in the latter proceeding recently succeeded in partially defending an application to strike out the claim, which will now continue to the discovery phase.
The outcome of these cases will have a significant impact on how large language models and other AI models are trained in the future and how Gen AI tools are used and, ultimately, how much it costs to use them.
Other publications and content creators have taken the commercial route and sought to make deals with the tech companies for compensation. For instance, in April 2024 the Japanese-owned Financial Times announced a strategic partnership and licensing agreement with OpenAI.
As we wrote in June 2023 in this article, the pace of AI’s development has meant that regulation of Gen AI is evolving rapidly around the globe, with different jurisdictions taking different approaches and moving at different speeds. The European Union and China are the only jurisdictions that have enacted comprehensive AI-specific legislation. Other jurisdictions currently rely on existing regulatory frameworks to govern and regulate the use of AI and either have AI-specific principles or guidelines in place, or are in the early stages of proposing AI-specific legislation. Some jurisdictions have passed or proposed legislation with purported extra-territorial effect.
Regulation is not straightforward as it needs to address the various risks referred to above, while being careful not to stifle the innovation and efficiency gains that AI has the potential to offer. How different jurisdictions address this balance over time will be interesting to watch.
This year, the AI Forum New Zealand, in conjunction with Victoria University of Wellington and Callaghan Innovation, surveyed 232 New Zealand organisations to measure their use of, and views on, AI.8 The survey found that 67% of respondents reported using AI in their organisations, and 96% agreed that AI has made workers more efficient in their work.
In an Analytical Note released in July 2024, the New Zealand Treasury suggested that, as an “advanced, high-skilled economy, New Zealand is likely to make more substantial short-term productivity gains from AI than less developed, lower-skilled economies”.9 However, the Treasury went on to caution that New Zealand could lag behind comparable jurisdictions in reaping the benefits of AI, as the uptake of advanced digital technologies and digital innovation tends to be slower in New Zealand than in other jurisdictions. The Treasury attributed this potential slower uptake partly to lower levels of research and development investment by New Zealand firms.10
In August 2024, Microsoft estimated that Gen AI could add NZ$76 billion to New Zealand’s annual GDP by 2038.11
There are currently no AI-specific laws in New Zealand. Rather, New Zealand relies on existing regulatory frameworks to govern the use and deployment of AI. These existing laws are supplemented by some non-mandatory principles issued by the Privacy Commissioner.
The New Zealand statutes relevant to the use and adoption of Gen AI are set out below. A wide range of other legal considerations may also be relevant depending on the particular context, including contractual terms, competition law, confidentiality and legal privilege, as well as industry-specific regulatory obligations.
In a paper to Cabinet from July 2024,12 the Office of the Minister of Science, Innovation and Technology recommended that, rather than developing a “standalone AI Act”, existing frameworks could be updated as needed to balance the competing interests of enabling AI innovation and mitigating AI risks. The Cabinet paper discouraged regulating AI based on “speculated harms”, as doing so may harm productivity, and instead encouraged a proportionate, risk-based approach to regulating the technology, to support the use of AI in New Zealand and boost innovation and productivity. This seems to signal that New Zealand’s approach will be to adjust the existing statutes, rather than to create a new AI Act.
While these general-purpose, principle-based laws can apply to the regulation of AI and its derivative products, it remains to be seen how well this existing regulatory framework will cope with AI issues, given their complexity.
The Australian Treasury recently released a discussion paper as part of a wider review into the impact of AI on consumer protection legislation.13 As discussed in our article on this topic here, the outcome of this review could potentially influence the regulatory response to AI in New Zealand, given the close parallels between Australian and New Zealand consumer laws.
With different approaches being taken in different jurisdictions, New Zealand has the advantage of being able to assess which of the various international approaches is most effective and fit-for-purpose in an economy of our size.
See Bell Gully’s AI team here.
Bell Gully is a foundation supporter of Newsroom.
[1] McKinsey Global Institute, “The economic potential of generative AI: The next productivity frontier”, June 2023.
[2] Filed in December 2023 in the US District Court for the Southern District of New York, New York Times Company v. Microsoft Corp., et al, Case No. 1:23-cv-11195 (S.D.N.Y. Dec. 27, 2023).
[3] Harvard Law Review, “NYT v. OpenAI: The Times’s About-Face”, 10 April 2024.
[4] The lawsuit was filed in October 2024 in the US District Court for the Southern District of New York, Dow Jones & Company, Inc. and NYP Holdings, Inc., v. Perplexity AI, Inc., Case No. 1:24-cv-7984 (S.D.N.Y. Oct. 21, 2024).
[5] Authors Guild, et al., v. Open AI Inc., et al., Case No. 1:23-cv-8292 (S.D.N.Y. Sept. 19, 2023).
[6] Christopher Farnsworth v. Meta Platforms, Inc., Case No. 3:24-cv-6893 (N.D. Cal. Oct. 1, 2024). See also, Reuters, “Meta hit with new author copyright lawsuit over AI training”, 3 October 2024, Blake Brittain.
[7] Illustrators Sarah Andersen, Kelly McKernan and Karla Ortiz sued Stability AI in the US District Court in California in January 2023. See Sarah Andersen, et al., v. Stability AI Ltd., et al., Case No. 23-cv-00201-WHO (N.D. Cal. Jan. 13, 2023).
[8] AI Forum New Zealand, “New Zealand’s AI Productivity Report”, September 2024.
[9] New Zealand Treasury, Analytical Note: The impact of artificial intelligence – an economic analysis, Harry Nicholls and Udayan Mukherjee, July 2024.
[10] 0.8% of GDP in 2019, compared to an OECD average of 1.8%.
[11] Accenture and Microsoft, “New Zealand’s Generative AI opportunity”, 21 August 2024.
[12] Ministry of Business, Innovation & Employment, Approach to work on Artificial Intelligence Cabinet Paper, 25 July 2024.
[13] Australian Treasury, “Review of AI and the Australian Consumer Law” Discussion Paper, October 2024.