There is a lot of noise around using artificial intelligence (AI) in the energy industry. Bennett Jones' Sébastien Gittens helped make sense of it on Day Three of the World Petroleum Congress in Calgary. He spoke with conference attendees about how AI is being used right now and what energy companies should consider when onboarding it.
How AI is Being Used in the Energy Industry
Regardless of where an energy company may be along its energy transition journey, AI can be used in a number of ways, including to:
- improve inventory demand planning;
- avoid wastage of products and raw materials;
- decrease machinery idle time;
- optimize production processes in real time; and
- identify anomalies in facilities using machine vision.
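To make the anomaly-detection use case above concrete, here is a minimal, hypothetical sketch of flagging outliers in a stream of sensor readings with a simple z-score test. The data values and the `find_anomalies` function are illustrative assumptions only; production systems use far richer models (including machine vision over facility imagery, as noted above).

```python
# Hypothetical sketch: flag anomalous sensor readings with a z-score test.
# Not tied to any vendor's system; values below are made up for illustration.

from statistics import mean, stdev

def find_anomalies(readings, threshold=3.0):
    """Return the indices of readings more than `threshold`
    standard deviations away from the mean of the series."""
    mu = mean(readings)
    sigma = stdev(readings)
    if sigma == 0:
        return []  # a perfectly flat series has no outliers
    return [i for i, r in enumerate(readings)
            if abs(r - mu) / sigma > threshold]

# Example: a steady pressure series with one spike at index 4.
pressures = [101.2, 101.4, 101.3, 101.5, 180.0, 101.1, 101.3]
print(find_anomalies(pressures, threshold=2.0))  # → [4]
```

Even this toy version shows the governance questions discussed later in this post: the choice of threshold, the quality of the input data, and who reviews the flagged readings all require human judgment.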
Some of the largest energy companies in the world are currently using AI in their existing operations. For example:
- Shell uses AI in predictive maintenance and to improve the efficiency and lower the emissions of its fleet of LNG tankers;
- Saudi Aramco is applying AI to optimize reservoir management, flare monitoring and power consumption; and
- Chevron uses AI to inform its exploration and drilling decisions, and to advance how the company captures and permanently stores carbon deep underground.
Where AI Regulation Now Stands in Canada
- There are many competing definitions and frameworks, and no consensus has yet been established. The proposed federal Artificial Intelligence and Data Act (AIDA) defines AI as "a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions." Other jurisdictions define it differently.
- AIDA is now at second reading in the House of Commons. It was introduced as part of Bill C-27 and, if the Bill receives Royal Assent, a consultation process for AI regulation will be initiated. It is envisioned that there would be a period of at least two years after Bill C-27 receives Royal Assent before AIDA comes into force.
- AIDA will apply to "high-impact AI systems," but AIDA does not clearly define what this includes. A Companion Document for AIDA provides some examples of "high-impact AI systems," including:
- screening systems impacting access to services;
- biometric systems used for identification and inference;
- systems that can influence human behaviour at scale; and
- systems critical to health and safety.
- As AIDA progresses, the Canadian government has: (i) commenced consultations on its proposed Code of Practice; and (ii) published a Guide on the use of Generative AI. The Guide is intended to: (i) provide preliminary guidance to federal institutions on their use of generative AI tools; and (ii) facilitate compliance with existing laws and policies. The Code, by contrast, is intended to be implemented on a voluntary basis by Canadian firms ahead of AIDA coming into force.
- Existing laws continue to apply, including in the areas of privacy, intellectual property and human rights. Standards, codes and rules from applicable regulators should also be considered.
Key Considerations and Risks with AI in Energy
While not exhaustive, some of the things that energy companies should consider when using AI include:
- AI is a tool. It does not allow users to abdicate their responsibilities.
- Governance regarding the adoption/implementation of AI. Organizations should adopt practical policies, practices and procedures with respect to the adoption of AI that incorporate various considerations to identify, assess and mitigate risk. Such considerations include, among numerous others, the following:
- Does the AI solution meet your business requirements? Since there is a lot of noise around AI, companies need to: (i) diligently assess if adopting a particular AI solution will actually meet their needs; and (ii) ensure that there is a business case for adoption (that reflects costs, timing, etc.).
- What are the design inputs and outcomes? For example, what are the sources of data used by the AI solution? Does the organization have all necessary rights and permissions to use such data? What is the quality, integrity and reliability of the output? Can the output be validated independently? What's the likelihood of the AI solution "hallucinating" or perpetuating biases? Will there be human oversight and monitoring of the AI solution?
- What are the ramifications of errors? Depending on the use case, there can be significant consequences (including legal liability, loss of opportunity, loss of goodwill, and reputational and brand risk) of adopting a particular AI solution.
- How is the AI being delivered? For example, is the solution being delivered "on premises" or in the cloud? Is the solution sufficiently resilient and reliable? Is the AI solution secure?
- How transparent is the AI? Can the organization explain the AI solution? Does the organization understand the strengths and limitations of the AI solution?
- Governance regarding the use of AI. Organizations should adopt practical policies, practices and procedures with respect to the use of AI by employees and others.
- No one size fits all. Parameters with respect to the responsible use of AI will be different for every organization.
- Regulatory requirements will vary from jurisdiction to jurisdiction around the world, as countries are at different stages of the policy continuum. It will be important to monitor regulatory and legal developments.
On the fourth and final day of WPC 2023, one of Bennett Jones' Knowledge Connect sessions will look at biofuels, RNG and natural gas innovation.