
The Legal and Ethical Issues and Impact of Artificial Intelligence on the Legal Profession: Which Way Nigeria? - By Aminu Hassan

Name: Aminu Hassan
Email: aminuhassan582@gmail.com
Phone No.: 08136280920

Introduction

Artificial Intelligence (AI) has revolutionized various industries, and the legal profession is no exception. With its ability to analyze vast amounts of data, detect patterns, and make informed decisions, AI is reshaping the way legal practitioners work. This essay explores the profound impact of AI on the legal profession, examining both its positive contributions and potential challenges, as well as the way forward for Nigeria.

One of the primary areas where AI has made substantial inroads is legal research. AI-powered tools can analyze vast amounts of legal documents, statutes, and case law in a fraction of the time it would take a human researcher. This not only enhances efficiency but also allows legal professionals to access a broader range of information, facilitating more comprehensive and accurate legal analysis.

Moreover, AI has proven valuable in contract review and drafting. Machine learning algorithms can identify patterns, clauses, and potential risks in contracts, streamlining the review process. This not only reduces the likelihood of oversights but also enables legal professionals to focus on more complex and strategic aspects of their work.
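
To make the clause-review idea concrete, here is a minimal, hedged sketch in Python that flags potentially risky clauses using simple keyword patterns. The pattern list, risk labels, and sample contract text are illustrative assumptions; commercial review tools rely on trained machine-learning models rather than fixed patterns.

```python
import re

# Illustrative patterns only; real contract-review tools use trained ML models,
# not a fixed keyword list.
RISK_PATTERNS = {
    "unlimited liability": r"\bunlimited liability\b",
    "automatic renewal": r"\bautomatic(ally)? renew\w*\b",
    "unilateral termination": r"\bterminate .* at (its|their) sole discretion\b",
}

def flag_risky_clauses(contract_text: str) -> list[tuple[str, str]]:
    """Return (risk_label, clause_text) pairs for clauses matching a risk pattern."""
    findings = []
    # Split naively on sentence boundaries; production tools segment clauses properly.
    for clause in re.split(r"(?<=[.;])\s+", contract_text):
        for label, pattern in RISK_PATTERNS.items():
            if re.search(pattern, clause, flags=re.IGNORECASE):
                findings.append((label, clause.strip()))
    return findings

if __name__ == "__main__":
    sample = ("This Agreement shall automatically renew for successive one-year terms. "
              "The Supplier may terminate this Agreement at its sole discretion.")
    for label, clause in flag_risky_clauses(sample):
        print(f"[{label}] {clause}")
```

Even this naive approach shows the basic workflow: segment the contract into clauses, score each clause against known risk signals, and surface the matches for a lawyer to review.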

Another significant impact is in the realm of predictive analytics. AI can analyze historical legal data to forecast potential case outcomes, aiding lawyers in developing more informed strategies. This data-driven approach enhances decision-making and enables legal professionals to offer more accurate advice to their clients.
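
As a toy illustration of such predictive analytics, the sketch below trains a logistic-regression model on a handful of invented case features and outputs a probability of success for a new matter. The features, data, and model choice are assumptions for demonstration only, not a description of any real litigation-analytics product.

```python
# Toy illustration of outcome prediction from historical case features.
# The features and data are invented; real systems use far richer inputs
# (court, judge, claim type, precedent signals, etc.).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [claim_amount_millions, num_precedents_cited, is_appeal]
X_train = np.array([
    [0.5, 3, 0],
    [2.0, 1, 1],
    [1.2, 5, 0],
    [0.3, 0, 1],
    [4.0, 4, 0],
    [0.8, 2, 1],
])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = claimant succeeded, 0 = claim failed

model = LogisticRegression().fit(X_train, y_train)

new_case = np.array([[1.5, 4, 0]])
prob_success = model.predict_proba(new_case)[0, 1]
print(f"Estimated probability of success: {prob_success:.2f}")
```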

Despite these advancements, the integration of AI in the legal profession is not without challenges. Ethical concerns, such as bias in algorithms, raise questions about the fairness and objectivity of AI-generated legal decisions. Ensuring that AI systems are trained on diverse and representative datasets becomes crucial to mitigate these issues.

Additionally, there is the fear of job displacement among legal professionals as routine tasks become automated. However, proponents argue that AI’s role is complementary, allowing lawyers to focus on higher-level tasks requiring critical thinking, creativity, and emotional intelligence.

AI has undeniably reshaped the landscape of the legal profession. While enhancing efficiency and providing valuable insights, its integration necessitates careful consideration of ethical implications and the potential impact on employment dynamics. Striking a balance between harnessing AI’s capabilities and maintaining the core values of the legal profession remains a critical challenge for the future.

Artificial Intelligence (AI):

Although the use of intelligent machines to work like humans can be traced far back into history, the modern understanding of AI is generally said to have begun in 1956, when the name ‘Artificial Intelligence’ was coined by John McCarthy at the Dartmouth Conference. Artificial Intelligence (AI) can be described as the processes involved when technology is used to carry out tasks that would ordinarily require human effort or natural intelligence. It is an area of computer science that emphasizes the creation of intelligent machines, such as robots, that work and react like humans and perform chores ordinarily done by humans. Artificial Intelligence, of which machine learning is a prominent branch, focuses on building and managing technology that can learn to autonomously make decisions and carry out actions on behalf of human beings. It refers to the simulation of human intelligence by software-coded heuristics.

Researchers have recently predicted that within the next ten years AI will outperform human beings in many tasks, such as language translation and even surgery. It is also believed that there is a 50% probability of AI outsmarting human beings in all tasks within 45 years and of AI automating all human jobs within 120 years.

Now, artificial intelligence (AI) is redefining what it means to be human. Its systems and processes have the potential to alter the human experience fundamentally. AI will affect not only public policy areas such as road safety and healthcare, but also human autonomy, relationships and dignity. It will affect lifestyles and professions, as well as the future course of human development and the nature and scale of conflicts. It will change the relationships between communities and those between the individual, the state and corporations. AI offers tremendous benefits for all societies but also presents risks. These risks potentially include further division between the privileged and the unprivileged; the erosion of individual freedoms through ubiquitous surveillance; and the replacement of independent thought and judgement with automated control.

In Nigeria, the legal profession is not immune to the impact of technology, which is rapidly changing the way legal services are delivered. One of the major ways technology has transformed the legal profession in Nigeria is through improved efficiency. Legal research that once required hours of manual searching through libraries can now be done within minutes using online databases and search engines. Moreover, technology has made the storage and retrieval of legal documents easier with electronic filing systems. In Nigeria, LawPavilion and LawPadi are leading research platforms that have made research easier for lawyers and judges.

Legal and Ethical Issues of AI in the Legal Profession in Nigeria:

Artificial Intelligence is increasingly being used by lawyers in their practice to increase the efficiency and accuracy of the services they render. Although AI offers numerous ground-breaking advantages, it is important to also consider the legal and ethical issues it raises for the legal profession in general.

The issues most practitioners have with the introduction of AI technology to law are multifaceted. Apart from the fear of machines or robots taking over the practice of law, several other questions have been asked: Can AI be considered a legal person and be directly liable for damage? Can tasks performed entirely by AI be said to fall under the ‘practice of law’? Can AI be said to be a licensed legal practitioner within the ambit of available laws regulating legal practice? Since there is no direct legal framework regulating AI either in Nigeria or on the international scene, allusions are made to case law and interpretations gleaned from general legal principles in answering these questions.

Looking at Nigerian law, can AI be considered a legal person directly liable for damage? In Nigeria, the categories of juristic persons who may sue or be sued include: natural persons; companies incorporated under the Companies and Allied Matters Act; corporations aggregate and corporations sole with perpetual succession; and certain unincorporated associations granted the status of legal personae by law, such as registered trade unions, partnerships, friendly societies or sole proprietorships. Although AI is not granted juristic personality in Nigeria, the Nigerian courts will uphold, as always, the fundamental maxim in the administration of justice, “ubi jus ibi remedium”, meaning “where there is a wrong there is a remedy”. Hence, where there is no remedy provided by common law or statute, the courts have been urged to create one; the court cannot, therefore, be deterred by the novelty of an action. For instance, anyone who has suffered loss or injury as a result of AI has several options for asserting claims for compensation against the manufacturer, owner, keeper, user, network provider, software provider, etc., under general tort law, contract and statute, as the case may permit.

Since AI is not a legal person capable of rights and duties in law, it is unlikely that it will be held legally liable for acts done by it. However, manufacturers’ liability under the tort of negligence can be resorted to in determining liability in cases of third-party mishap occasioned by the use of artificial intelligence, robotics and automated systems. This liability may shift to the owners or operators of AI where the manufacturer is able to prove to the satisfaction of the court that reasonable steps were taken to mitigate risk from their end. According to Paragraph AD of the European Parliament Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)):

whereas under the current legal framework robots cannot be held liable per se for acts or omissions that cause damage to third parties; whereas the existing rules on liability cover cases where the cause of the robot’s act or omission can be traced back to a specific human agent such as the manufacturer, the operator, the owner or the user and where that agent could have foreseen and avoided the robot’s harmful behaviour; whereas, in addition, manufacturers, operators, owners or users could be held strictly liable for acts or omissions of a robot.

There are provisions of the Criminal Code Act that evoke the notion of criminal liability in Nigeria, including Sections 7, 22, 23, 28 and 30 and Chapter 5 of the Act, as well as Section 36(5) of the Constitution of the Federal Republic of Nigeria. To start with, Section 36(12) of the Constitution clearly uses the word ‘person’ in enshrining one of the principles of legality that must be satisfied before anyone is tried or punished for any alleged criminal offence. This shows that it is a sine qua non that, before criminal liability is ascribed, it must be certain on whom the imputation is being made. Artificial intelligence entities are not recognized under Nigerian criminal law as having the legal personality to be subjects of the law.

Furthermore, under Nigerian jurisprudence as adopted from the common law, there are two elements for fixing criminal liability: the actus reus (the guilty act) and the mens rea (the guilty mind), and generally, once one element is missing, there is no criminal liability. Mens rea simply means having ‘a guilty mind’, while the actus reus literally means the ‘guilty act’ and generally refers to an overt act in furtherance of a crime. These elements introduce the question: do AIs have the mental capacity to be held liable for an offence? It is generally believed that machines are excluded from criminal liability for lack of mental capacity to know that the nature of their act is one that would result in an offence (since the act flows from the programming of the maker or the command of the user, not from the machine itself) or to form general or specific intent (since it can be argued that machines do not know good or bad beyond the commands given to them). Thus, the mental element needed to make them criminally liable is missing.

The question, ‘Can AI be said to be a licensed legal practitioner within the ambit of available laws regulating legal practice?’, cannot be affirmatively answered in Nigeria. To qualify as a legal practitioner in Nigeria, an individual must have a law degree, have attended the Nigerian Law School and have obtained a qualifying Bar Certificate or a Certificate of Exemption. The Legal Practitioners Act specifically provides that persons whose names appear on the Roll of Legal Practitioners kept by the Registrar of the Supreme Court shall be entitled to practice. Since a lawyer is a person licensed to practice law, it presupposes that a ‘legal practitioner’ must be a natural person. If it is established that only natural persons come within the scope of legal practitioner, one can therefore posit that a ‘robotic lawyer’ is unknown to Nigerian law as a person who could practice law. In addition, enrolment at the Supreme Court, payment of the practicing fee and generally being ‘fit and proper’ further entitle an individual to practice and ultimately confer the right of audience in court. Even though AI applications can be used in case management, contract review, determining the outcome of cases using specially built legal algorithms or other automated tasks, it is unlikely that a robot or AI application can fit the description of a legal practitioner under the relevant Nigerian laws or be said to be licensed to practice law in the Nigerian context.

Another legal concern of AI in Nigeria is the ownership and protection of intellectual property rights of AI-generated works, whether they be art, music, or written content. Determining the legal status of AI-generated works and their protection under intellectual property laws can be complex.

AI-generated works, inventions, or innovations could raise issues related to intellectual property rights. There might be questions about the ownership and protection of AI-generated creations, inventions, and patents. Before now, computers and machines have been used to create works of art and other literary works. This, however, was no more than the result of the work put in by the programmer and operator; the computers or machines did no more than the paint brushes or pens used by artists or writers. Copyright protection in this regard was not so difficult, as the programs used to create artistic works were more or less tools in the hands of the author. AI is quite different for copyright purposes, because the work is actually generated by the computer program independently of the programmer.

The ownership of copyright in works created by Artificial Intelligence is a lot more problematic because, unlike works created without AI, the computer program is more than a tool and makes many of the decisions involved in the creative process without human intervention. The fulcrum of this problem is the originality of the work, and originality is one of the conditions for copyright protection in Nigeria. Section 2(2)(a) of the Nigerian Copyright Act, 2022 provides as follows: ‘A literary, musical or artistic work shall not be eligible for copyright unless sufficient effort has been expended on making the work to give it original character’. This has been interpreted to mean that the Copyright Act is not concerned with the originality of ideas but with the expression of thought. As remarked in University of London Press v University Tutorial Press, originality does not require that the expression be in an original form, but that the work must not be copied from another work; it should originate from the author. The language of the Copyright Act and of the courts is that copyright should originate from the author, manifested by the expression of the author.

In relation to Artificial Intelligence, the work is created by a computer program which enjoys neither legal nor corporate personality. In Nigeria, the basis for the protection of copyright does not extend beyond works created by persons (natural or corporate), and there is no provision for works created by AI. This is evidenced by Section 5 of the Copyright Act, 2022, which provides that copyright shall be conferred on every work eligible for copyright of which the author, or in the case of a work of joint authorship any of the authors, is, at the time when the work is made, a qualified person, that is to say, an individual who is a citizen of, or is domiciled in, Nigeria, or a body corporate incorporated by or under the laws of Nigeria.

It has, however, been argued that the copyright in any work created by an AI program should belong to the programmer, on the basis that copyright should belong to whoever has undertaken the arrangements necessary for the creation of the work. This implies that the copyright belongs neither to the robot nor to the artificial intelligence system, but to the person who created the robot or the intelligent system. This approach does not treat the work done by a robot or an artificial intelligence system as the work of those artificial systems, but as the product of the work done by the original creators of the system. The approach is conservative and attempts to fill the vacuum in the law by ensuring that some party enjoys copyright in the work created by the system. Although it is not the most forward-looking position, it is still more desirable than holding that a work created by a robot or an artificial system does not meet the requirements for copyright protection because the author of a creative work is presumed to be either a natural or a juristic person. This seems to be the approach under Nigerian copyright law.

On data protection, considering the fact that AI is trained and developed with reliance on data, how that data is sourced and stored may grow into a cause for concern as AI becomes widespread. An example of this is the robot Sophia. Under its profile on the Hanson Robotics website, it is stated that the robot will transmit information from its interactions to a central database, and that such information will be used in further developing the robot. If this data is stored in the cloud, under whose jurisdiction will it fall if a claimant decides to file a suit concerning the wrongful mining of his data without his consent?

AI relies heavily on data for training and operation, and the handling of this data may raise major privacy concerns. The collection, storage, processing, and control of personal information must comply with data protection regulations. On June 14, 2023, President Bola Tinubu signed the Data Protection Act 2023 into law, and the objectives of the Act include safeguarding the rights and freedom of data subjects as guaranteed by the Constitution. Privacy in simple terms is the right not to be observed. Key data privacy risks in the use of AI systems include, but are not limited to unauthorized collection of data, data breaches, cross-domain data sharing, lack of proper data storage, etc. To mitigate these data privacy risks, the provision of the recent Nigerian Data Protection Act 2023 becomes applicable and relevant. The Act prohibits the unlawful processing of personal information, which consists of personal data and sensitive personal data of natural persons. For the purposes of the Act, “personal data” means any information relating directly or indirectly to an identified or identifiable individual, by reference to an identifier such as a name, an identification number, location data, an online identifier, or one or more factors specific to the physical, physiological, genetic, psychological, cultural, social, or economic identity of that individual. The Act also defines “sensitive personal data” as personal data relating to an individual’s genetic and biometric data, for the purpose of uniquely identifying a natural person; race or ethnic origin; religious or similar beliefs; health status; sex life; political opinions or affiliations; trade union memberships; and other information which may be prescribed by the Commission as sensitive personal data.

AI systems process vast amounts of sensitive data. Safeguarding client confidentiality and ensuring compliance with privacy laws therefore become crucial to maintaining the trust and integrity of the legal profession. As noted above, Hanson Robotics discloses that Sophia transmits information from its interactions to a central database, to be used in further developing the robot.

Based on this, lawyers are faced with the challenge that the AI tools they use may well expose the confidential information of their clients to the world. Under the Nigerian Rules of Professional Conduct for Legal Practitioners, lawyers owe their clients a general duty of confidentiality. This duty specifically requires a lawyer to make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client. The use of some AI tools may require client confidences to be “shared” with third-party vendors. As a result, lawyers must take appropriate steps to ensure that their clients’ information is safeguarded. To minimize the risks of using AI, a lawyer should discuss with third-party AI providers the confidentiality safeguards in place. A lawyer should inquire about what type of information is going to be provided, how the information will be stored, what security measures are in place with respect to the storage of the information, and who is going to have access to the information. AI should not be used in the representation unless the lawyer is confident that the client’s confidential information will be secure.
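
One practical safeguard, sketched below in Python under the assumption of a plain-text workflow, is to redact obvious identifiers from a document before it is ever submitted to a third-party AI service. The regular expressions and the sample client name are illustrative assumptions and do not amount to a complete anonymisation solution.

```python
import re

# Illustrative redaction before sending text to any third-party AI service.
# These patterns catch only obvious identifiers; they are not a substitute
# for a proper anonymisation review by the lawyer.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b(?:\+?234|0)[789][01]\d{8}\b")  # common Nigerian mobile formats (assumed)

def redact(text: str, client_names: list[str]) -> str:
    text = EMAIL.sub("[EMAIL REDACTED]", text)
    text = PHONE.sub("[PHONE REDACTED]", text)
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT REDACTED]", text, flags=re.IGNORECASE)
    return text

if __name__ == "__main__":
    brief = "Our client, Chidi Okafor (chidi@example.com, 08031234567), disputes the claim."
    print(redact(brief, client_names=["Chidi Okafor"]))
```

A firm would normally pair such redaction with the contractual and technical safeguards discussed above rather than rely on it alone.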

Lawyers face several ethical considerations when utilizing AI tools in their practice. Some of these issues include: confidentiality and privacy; bias and fairness; competence and reliance; transparency and accountability; client consent and communication; unauthorized practice of law; and professional integrity. Lawyers have a duty to maintain professional standards and to ensure their use of AI is compatible with their ethical obligations under the Rules of Professional Conduct for Legal Practitioners (‘the Rules’). The Rules are intended to regulate the professional conduct of lawyers and are binding on all lawyers licensed in Nigeria. Some Rules are particularly relevant to a discussion of the use of AI in the legal profession.

The Rules of Professional Conduct impose on lawyers a duty of competence, which, among other things, requires a lawyer to apply the learning and skill reasonably necessary for the representation of a client and to associate with other lawyers to discharge this duty where necessary. The duty of competence includes the duty to keep abreast of changes in the law and its practice, including the benefits and risks associated with the use of relevant technology. The use of AI in the practice of law presents at least two competency issues to consider. First, lawyers have an ethical duty to understand the risks and benefits of AI tools for both lawyers and clients, and how they may be used (or should not be used) to provide competent representation. Second, lawyers should consider how they can incorporate AI tools into their practices without compromising the competent representation of their clients. Although AI can be a powerful tool, its use may have catastrophic results for both lawyers and clients if lawyers fail to vet any outputs prior to using them in their work. For example, two attorneys were sanctioned by a New York federal judge for submitting a brief authored by AI that referenced nonexistent case law. Finally, as AI tools become more sophisticated and their use in the legal profession becomes more widespread, lawyers will need to consider whether the failure to use an available AI tool would itself be a failure to meet the duty of competence.
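
That vetting step can be partly automated. The hedged sketch below extracts citations in the Nigerian law-report style from an AI-generated draft and flags any that cannot be matched against a verified reference list; the citation pattern and the contents of the verified list are assumptions for illustration, and a flagged citation still requires manual confirmation against the actual report.

```python
import re

# Hypothetical check: extract LPELR-style citations from an AI-generated draft
# and flag any that do not appear in a verified reference list kept by the firm.
CITATION = re.compile(r"\(\d{4}\)\s+LPELR-\d+\([A-Z]+\)")

VERIFIED_CITATIONS = {
    "(2024) LPELR-61898(CA)",   # assumed entry in the firm's verified database
}

def unverified_citations(draft: str) -> list[str]:
    cited = CITATION.findall(draft)
    return [c for c in cited if c not in VERIFIED_CITATIONS]

if __name__ == "__main__":
    draft = ("See UNITY BANK PLC v. ALONGE (2024) LPELR-61898(CA) "
             "and DOE v. ROE (2023) LPELR-99999(SC).")
    for c in unverified_citations(draft):
        print("Could not verify:", c)
```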

Rule 52 of the Rules establishes the ethical standards on the reasonable fees a lawyer may charge a client. Because the factors used in determining the reasonableness of a fee include time and labour, the novelty of the issue, and customary fees, novel fee issues can arise if a lawyer employs AI tools to perform some tasks in his representation of a client. Can a lawyer ethically bill a client for the work that an AI tool performs? Can an AI tool have an hourly rate? And how would a lawyer account for the “time” the AI tool “expended” to perform a particular task? Conversely, if a lawyer could use AI to perform certain tasks, such as completing the first draft of a routine document or reviewing a contract to ensure defined terms are used consistently, but elects not to do so and instead performs the tasks himself and bills his client for the work at the lawyer’s standard hourly rate, has the lawyer charged the client an unconscionable fee in violation of Rule 52? The answers to these questions are not clear, but a lawyer may have an ethical obligation to employ available technology to provide legal services to a client more efficiently.

The duty of confidentiality is provided under Rule 19 of RPC, 2020 (as amended), which requires a lawyer to maintain as confidential all information the lawyer obtains from a client in the course of representing that client, unless the client authorizes its disclosure. Some AI tools do not guarantee the confidentiality of user inputs. For example, OpenAI, the creator of ChatGPT, discloses in its Terms of Service and related documents that a user’s “conversations may be reviewed” by OpenAI employees to “improve OpenAI’s system,” and OpenAI explicitly warns users not to “share any sensitive information in their conversations”.  Further, OpenAI’s Privacy Policy places the burden of maintaining confidentiality on users: “You should take special care in deciding what information you send to us via ChatGPT”. In order to comply with Rule 19, it is important that lawyers ensure the AI tools they employ have implemented measures to protect client information. Lawyers should review the terms of use and privacy policies of an AI tool before using it, and only use a particular tool when the lawyer is confident that the client’s confidential information is secured.

While a review of the Rules may assist lawyers in identifying potential issues in the ethical use of AI tools in their practices, the Rules also provide helpful guidance in identifying practical suggestions for incorporating AI into the practice of law. Lawyers should exercise care when deciding whether a particular AI tool would provide useful assistance in the representation of a client. Lawyers may, at times, need to consult with technology experts to understand an AI tool, how it works, and whether it can be usefully deployed in a particular client matter. Lawyers should obtain informed consent from clients regarding the use of AI tools in their legal matters.

Clients should be made to understand the potential risks and benefits of AI technology and how it may impact their cases. AI tools may be used as a starting point in generating content, but AI-generated work product should never be presented as finished content or as a lawyer’s final product. Lawyers have a professional obligation to thoroughly review any AI-generated work product to ensure the results are accurate. Lawyers should also be cautious when sharing client or firm data with AI tools. If the tool lacks robust confidentiality and data security, obtaining the client’s informed written consent is essential before using it. Additionally, lawyers should verify whether any third parties can access the data, to avoid compromising the attorney-client privilege. Finally, lawyers should not directly quote output from AI tools in work product sent to clients, opposing parties, or the courts. As discussed above, any AI outputs should be reviewed thoroughly before being incorporated into a preliminary draft or version of any attorney work product. This recommendation includes confirming the accuracy of any cases cited to support a particular argument.

Impact of Artificial Intelligence on the Legal Profession

With the right technology, a legal practice can be managed from anywhere in the world, with or without the physical presence of the business owners. In recent times, proficiency in technology appears to be gradually becoming a requirement for practising law ethically. With AI, the cases, documents, and human and material resources of firms can be managed and tracked: timekeeping, scheduling, billing, invoicing, client relationship management and many of the administrative aspects of law firms can be handled efficiently. Practice management software offered by companies like NextCounsel and Legalpedia, among others, is in high demand because of the ease with which tasks are handled when it is used.

At the forefront of the technological drive in Nigeria is LawPavilion Business Solutions. With packages like LawPavilion Prime, LawPavilion 360, LawPavilion CaseManager, and LawPavilion AI Suite, response time to clients is optimized, legal analysis and research are done seamlessly, and cases are managed with increased efficiency. Beyond automating legal services with these packages, in 2018 the company went a step further in integrating AI into its legal software solutions with the launch of LawPavilion TIMI, an intelligent legal assistant chatbot that is the first of its kind in Nigeria. There is also an AI-powered speech-to-text transcription system which transcribes spoken words into written format, thereby reducing the stress of judges writing in longhand and gradually replacing the stenographic recording we are used to. In Nigeria, LawPadi provides an online legal advice system, legal resources, and virtual assistance on legal issues through a chatbot. In addition, it provides access to quality legal support at affordable fees with its digital tools. With Legalpedia’s comprehensive set of technology solutions, legal practitioners are empowered to successfully adjust their practice to artificial-intelligence-type tools and to activate other technological possibilities. Similarly, organizing and sharing legal resources, whether online or offline, is simplified with CompuLaw’s workflow tools. Technologically driven services like speedy legal research, library management, document automation, and verbatim reporting, among others, are available freely, on subscription or by outright sale, depending on the package(s) deployed.

Artificial intelligence also benefits lawyers in the following ways. It saves time, as it analyses more information, more thoroughly than humans, in a tiny fraction of the time. This in turn helps to save cost and improve efficiency, since less staff time is involved in finding answers and identifying mistakes. The quality of work produced is better, as human factors such as fatigue, sentiment and distraction are absent. It also improves organizational and logical structure with automatic document comparison; thus, lawyers can more quickly identify holes or gaps in their documents and even in their legal analysis.
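
The automatic document comparison mentioned above can be illustrated with Python’s standard difflib module, which produces a unified diff of two drafts. This is a minimal sketch with invented clause text, not a description of any particular practice-management product.

```python
import difflib

# Minimal illustration of automatic document comparison using the standard library.
draft_v1 = """The Tenant shall pay rent annually in advance.
The Landlord may inspect the premises with 48 hours' notice.
""".splitlines(keepends=True)

draft_v2 = """The Tenant shall pay rent quarterly in advance.
The Landlord may inspect the premises with 24 hours' notice.
The Tenant shall insure the premises against fire.
""".splitlines(keepends=True)

# Print every clause that was added, removed, or changed between the two drafts.
diff = difflib.unified_diff(draft_v1, draft_v2, fromfile="draft_v1", tofile="draft_v2")
print("".join(diff))
```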

AI and the way forward for Nigeria

Nigeria can be considered an artificial intelligence (AI) champion on the African continent, being the first country in the region to institutionalize a National Centre for Artificial Intelligence and Robotics (NCAIR) and to establish dedicated government institutions that foster a knowledge-based economy and promote the research and development of AI systems in Nigeria. Nigeria is also set to produce its first national AI strategy or policy.

A human-centric approach to data governance is imperative for a standardized set of data-protection rules and for addressing ethical concerns around the collection, holding, and processing of citizens’ data. Nigeria’s Data Protection Regulation (NDPR) provides a legal framework for the use and exchange of electronic data, and Guidelines for the Management of Personal Data by Public Institutions in Nigeria were also introduced by the National Information Technology Development Agency (NITDA). Whilst these are progressive rules, there is a need for comprehensive national data legislation and an ombudsman for Nigeria’s data governance.

A national policy on AI should prioritize regard for Nigeria’s democratic values, comply with Nigeria’s constitutional principles, and help to meet the socioeconomic needs of the Nigerian people. The policy should maintain standards of algorithmic accountability, data protection, explainability of decision-making by machine-learning models, and protection of citizens’ human rights from infringement. Nigeria’s AI policy should emphasize the fundamental human rights provisions of the Nigerian constitution, particularly the right to privacy, non-discrimination, and the protection of the dignity of Nigerians. The policy should also align with supranational rights-respecting AI norms and standards that promote equality, inclusion, diversity, safety, fairness, transparency, and algorithmic accountability.

The sudden resort to AI and other digital technologies because of the COVID-19 pandemic has created new vulnerabilities for the data of Nigerians to be commercialized and even weaponized. Therefore, issues of algorithmic bias, loss of privacy, lack of transparency, and the overall complexity of getting Nigerians to understand how they are interacting with AI require policy consideration. AI assessments by themselves should not be a basis for decisions, owing to the probabilistic nature of their predictions. Nigeria’s AI policy should be critical about the extent to which AI systems can be relied on in certain public sectors and should limit or justify the use of such technology in areas of law enforcement, criminal justice, immigration, and national security.

Nigeria should also focus on the creation of easily accessible and affordable digital infrastructure, including the spectrum of secure networks, computers, and storage capabilities required for the successful delivery of AI applications and services. The use of locally developed AI systems should be promoted, while ensuring a transparent procurement process for AI systems from abroad. Such a procurement process, where needed, should focus on mechanisms of algorithmic accountability and transparency norms, with the opportunity for local knowledge transfer and long-term risk evaluation.

Data is the fuel powering AI. It is therefore essential for the AI policy to support a standardized set of data-protection rules and address ethical concerns around the collection, holding, and processing of citizens’ data. It is important to note that all data is historical and is subject to change. Furthermore, there needs to be a deliberate promotion of mutual trust between AI institutions and the Nigerians who are the data subjects and who deserve to know how their data is collected, stored, processed, shared, and potentially deleted. Data privacy frameworks are important to curb some of the threats linked to the use of AI, and so Nigeria’s AI policy could impose limitations on the type of data that may be inferred, used, and shared.

Whilst the NDPR and the matching guidelines from NITDA are progressive and commendable, they are largely insufficient in supporting Nigeria’s data governance and guaranteeing data privacy and protection in Nigeria. The NDPR will require periodic revision and, more importantly, there is a need for comprehensive legislation that enforces a rights-centric data protection obligation for the benefit of Nigerians. Because the government is currently the largest data processor, Nigeria also needs an independent data ombudsman. The Data Protection Act provides for the establishment of a Data Protection Commission with enforceable powers, and for a code of practice that ensures a rights-respecting data governance framework for Nigeria. Although the Act is not exhaustive, it is a step in the right direction. Nigeria can become an empowered data society, and a human-centric approach to data can help realize this.

Nigeria’s AI policy should emphasize the fundamental human rights provisions of the Nigerian constitution, particularly the right to privacy, non-discrimination, freedom of expression, and the protection of the dignity of Nigerians. The AI policy should also underscore the applicable UN human rights principles on business activities, as well as ensure compliance to the core documents that make up the International Bill of Rights.

Regulatory bodies in Nigeria should ensure that AI systems deployed under their purview do not inadvertently foster illegal discrimination, harmful stereotypes (including those centered on gender), and wider societal inequalities. Thus, exercising utmost caution when endorsing or employing AI systems in sensitive public policy domains like law enforcement, justice, asylum, and migration is imperative. Rigorous testing and validation of AI systems should be conducted before implementation, and these processes should persist throughout the system’s lifecycle, facilitated by regular audits and reviews. Regulators should establish comprehensive rules to counteract potential discriminatory impacts arising from AI systems, whether employed in the public or commercial sectors. These rules must safeguard individuals from the adverse consequences of such systems, proportionate to the associated risks. They should span the entire lifecycle of an AI system, encompassing tasks such as bridging gender data gaps, ensuring data sets’ representativeness, quality, and accuracy, optimizing algorithm design and system use, and rigorous testing and evaluation to identify and mitigate discrimination risks. Ensuring the transparency and auditability of AI systems is crucial to detecting biases throughout their lifespan.

Regulatory bodies should actively advocate for diversity and gender balance within the AI workforce, encouraging consistent input from a diverse array of stakeholders. Enhancing awareness about the risks of discrimination, encompassing novel forms of bias within the context of AI, is paramount.

AI is transforming the workforce, and its use is poised to increase in the decades ahead. With a median age of about 18 years, and recording the second highest unemployment rate globally, Nigeria must massively expand upskilling and reskilling efforts within its teeming workforce to leverage the opportunities of the fourth industrial revolution, and to sustain the nation’s labour economy. NITDA’s work with the Digital States Initiative is promising and can be expanded. Also, Nigeria’s AI policy can respond to the technological disruptions underway by encouraging AI training and labour innovation (including AI research and collaboration) across states and sectors. The policy could also recommend incentives for enterprises in Nigeria that do not automate jobs completely to maintain employment for Nigerians; or companies that upskill instead of displacing workers. Nigeria’s education reform should also be an important aspect of this AI policy.

Conclusion

Technology is evolving at a very fast pace, and if not handled well, it will boomerang and get out of hand. There is therefore an urgent need to provide a workable legal framework for AI in Nigeria. The reality is that, with the exponential advancement witnessed in the technology sector, the available civil rules in negligence and tort are inadequate. Specific legislation to statutorily control or monitor the making of intelligent machines and to certify them safe for human use is therefore important. In addition, instead of getting worried about AI, practitioners should stay focused and maximize the benefits of this technological innovation to their advantage and that of their profession. This they can do by taking conscious steps to evolve with the changing times, becoming technologically inclined, and being open to learning and/or working with technology tools. Technology has definitely changed our way of life; therefore, investment in AI will not be a waste, because its prospects and advantages are numerous. Imagine applications that are never tired, never need caffeine to stay awake, never ask for leave or vacations, and still work smartly with high precision! Anyone or any profession that fails to move with the technological wave of AI will be left behind. The future holds great promise for tech-savvy legal practitioners, but only if they decide to embrace it.

Source: @BarristerNg
