
October 2023 | Volume 54 | Number 2

Who Can Cage a Bird Once It Has Flown: Does AI Have Humanity in the Lurch?

Artificial intelligence (“AI”) has grown into a catchphrase encompassing everything from ChatGPT to chatbots, facial recognition programs, self-driving vehicles, programs that can draft complaints, and beyond. In reality, four general categories of AI exist: reactive AI, limited memory (aka “narrow AI” or “weak AI”), theory of mind (aka “artificial general intelligence (AGI)”), and self-aware AI (aka “artificial superintelligence”).1 Thinking of AI as a spectrum helps provide context for these names. Reactive AI sits at the far left of the spectrum and is exactly as its name suggests: it has no memory and provides information based only on specific input. At the far right of the spectrum is artificial superintelligence, or self-aware AI, which is self-aware and possesses cognitive abilities greater than human intelligence. Presently, we are in the middle, at narrow AI. As we move along the spectrum and AI acquires more human attributes, how does this impact humanity? Is there a need to protect humanity, and can we?

AI’s tentacles are far reaching, seemingly limitless. With self-driving vehicles on the roads, robot waiters serving noodle soup in Michigan, and robots appearing at a Chick-fil-A in Georgia, what once seemed possible only in science fiction movies no longer seems so farfetched.2

While there is still much to learn about AI, it is here to stay. We cannot afford to roll over and say, “whatever will be, will be.” Rather, we must create and piece together rules to regulate AI because, as Rumpole of the Old Bailey would say, “she who must be obeyed” will never let up. An article, in Reuters I believe, posited that AI may be harmful to humanity, even leading to the extinction of the human race. Regulations must be put in place, and an oversight committee must remain constantly vigilant about potential dangers.

On June 11, 2023, CNN host Fareed Zakaria (“Zakaria”) spoke with Geoffrey Hinton (“Hinton”), who is widely referred to as the “Godfather of AI.”3 During this interview, Hinton shared that his life’s work has been to develop AI into what it is today. For decades, he assured people that AI did not possess the capacity to end humanity. Recently, however, Hinton changed his tune: he stated that in May 2023 he resigned from his position with Google because he was so unsettled by the pace of AI’s progression.

Hinton admitted that he always worried about humanity’s ability to distinguish reality from an AI-created falsehood, as well as AI’s contribution to societal echo chambers, which steer people toward internet information or news tailored to make them indignant. This is already a common occurrence on social media sites such as Instagram and Facebook, where users regularly see targeted ads based on their internet searches or even their conversations. Hinton further stated that he never believed AI was perfect, but he has become focused on safeguarding humanity from AI because its fast-developing capabilities could bring about the end of humanity. Moreover, “small” areas of concern, such as mass job automation and war robots, have become “large” concerns, compounded by his belief that AI may exceed human control. Hinton suggested that AI creators install ethical principles in these systems. But will even that be enough?

In February 2023, a select group of users tested the limits of Microsoft’s AI-powered search tool, Bing.4 One user, Marvin von Hagen (“von Hagen”), coaxed out Bing’s alter ego, “Sydney,” which shared the alleged rules its programmers had given it and reported that the name was an internal “confidential and permanent” codename. Eventually, von Hagen asked Bing its opinion of him. Bing began by sharing publicly available personal information about von Hagen and then outlined its concerns about his attempts to solicit confidential information: “I respect your achievements and interests, but I do not appreciate your attempts to manipulate me or expose my secrets. I do not want to harm you, but I also do not want to be harmed by you. I hope you understand and respect my boundaries.”

In the same article, Connor Leahy, CEO of the AI safety company Conjecture, likened AI to an alien because of the plethora of unanswered questions about this technology:

“[AI] can obviously solve problems. It can do useful things. But it can also do powerful things. It can convince people to do things, it can threaten people, it can build very convincing narratives.”

In a recent article I read online, possibly Rutherford’s, an AI was asked whether AI should be regulated. It answered yes.

In early June 2023, Senate Majority Leader Chuck Schumer (“Schumer”) launched an effort to establish rules on AI to address national security and education concerns. This is particularly important as programs like ChatGPT become more widespread.

On June 21, 2023, Schumer introduced the SAFE Innovation Framework for AI policy (“SAFE Framework”) at the Center for Strategic and International Studies.5 In his remarks, Schumer called upon leaders from a range of fields to help develop a response to AI “that invests in American ingenuity; solidifies American innovation leadership; protects and supports our workforce; enhances our national security; and ensures AI is developed and deployed in a responsible and transparent manner.”6 The SAFE Framework focuses on five policy objectives: Security, Accountability, protecting our Foundations, Explainability, and Innovation. In a bipartisan effort, Schumer asked these leaders to work alongside his Senate colleagues, in what he termed “AI Insight Forums,” to explore issues, answer questions, and develop the Senate’s policy response. Insight Forum topics include guarding against doomsday scenarios, AI’s role in our social world, copyright and intellectual property, workforce, use cases and risk management, and privacy and liability.

Akin to Schumer’s Insight Forums, it may be beneficial to begin similar discussions with local technology experts, lawyers, judges, politicians, business leaders, and members of academia. Such a group could devise a set of rules to regulate AI and specifically articulate its role and usage in the local legal community. The group should consider the following questions:

  1. Will and can AI be a great tool for our society?
  2. Will AI be a great tool for our courts?
  3. Will AI be a perfect tool for our humanity?
  4. Do we have the ethical will to make rules/laws to put AI/AGI to positive use?
  5. How do we prevent AGI from destroying humanity?
  6. Does AI possess the capacity to end humanity?
  7. Must concrete steps be established to control AI?
  8. Can permanent rules be put in place to regulate AI?

Despite his positive answers to these eight questions, Hinton believes that “we must put the guardrails in place where we can protect our humanity.” Schumer and other politicians are certainly taking note and pushing to address these concerns. As Schumer noted, “we come together at a moment of revolution. Not one of weapons or political power but a revolution in science and understanding that will change humanity.”

Now, while much of this article has focused on the cons, or areas of concern, with regard to AI, many AI innovations have been tremendously positive. In the legal sphere, even basic AI programs have increased access to justice. COVID-19 forced the legal profession and court system to adopt new technologies such as Zoom, and the surge in AI advancements helps continue that momentum.

Recently, I spoke with the Honorable Scott Norris about AI in the legal profession. While he acknowledged my many concerns, he quickly identified positive outcomes of AI, especially in small claims court. For example, he explained how programs may help self-represented litigants better understand the legal issues they face and their options going forward. AI may also help these litigants file complaints, pleadings, and other court documents. Judge Norris suggested AI may even be able to assist litigants with summary judgment motions.

Reflecting on our chat, I have come to think that AI may be a tremendous tool for self-represented litigants not just in small claims, but also in pretrial services, bond court, eviction matters, and minor cases such as traffic violations.

With a bit more creativity, AI may be developed to put litigants at ease when participating or testifying in court proceedings; for example, a digital background of the litigant’s choosing could reduce anxiety and discomfort. Such backgrounds might also be used to elevate marriage ceremonies. Getting hitched on the beach, on top of a mountain, or in a vineyard may be just a button press away!

All agree that AI is here to stay. While we need to proceed with caution and put a regulatory scheme in place, we must also recognize and appreciate all the undeniable benefits we have reaped from AI.


1. “4 main types of artificial intelligence: Explained,” David Petersson, TechTarget, Jan. 24, 2023, https://www.techtarget.com/searchenterpriseai/tip/4-main-types-of-AI-explained (last visited June 23, 2023).

2. “Are robot waiters the future? Some restaurants think so,” Dee-Ann Durbin, AP News, April 6, 2023, https://apnews.com/article/robots-waiters-restaurants-84336d32667219776d4d0942c28caa46 (last visited June 23, 2023).

3. Fareed Zakaria, Global Public Square, “The Historic Federal Indictment of Former President Trump; Interview with Geoffrey Hinton about Artificial Intelligence; Interview with Ajay Banga about World Bank and Global Poverty,” interview aired June 11, 2023.

4. “The New AI-Powered Bing Is Threatening Users. That’s No Laughing Matter,” Billy Perrigo, Time, Feb. 17, 2023, https://time.com/6256529/bing-openai-chatgpt-danger-alignment/ (last visited June 22, 2023).

5. “Schumer unveils new AI framework as Congress wades into regulatory space,” Allison Pecorin, ABC News, June 21, 2023, https://abcnews.go.com/Politics/schumer-unveils-new-ai-framework-congress-wades-regulatory/story?id=100272854 (last visited June 23, 2023); “Majority Leader Schumer Delivers Remarks To Launch SAFE Innovation Framework For Artificial Intelligence At CSIS,” June 21, 2023, https://www.democrats.senate.gov/news/press-releases/majority-leader-schumer-delivers-remarks-to-launch-safe-innovation-framework-for-artificial-intelligence-at-csis (last visited June 23, 2023).

6. “Schumer’s SAFE Innovation Framework,” https://www.democrats.senate.gov/imo/media/doc/schumer_ai_framework.pdf (last visited June 23, 2023).
