It’s now been a year since OpenAI released an experimental chatbot dubbed “ChatGPT”. Since then, the terms “ChatGPT” and “AI” have mushroomed, spawning a movement of “Citizen Developers” and pages upon pages of school reports generated in seconds. But one thing we ALL learned in the past 12 months is that ChatGPT can quickly create something that looks real, without any fact checking involved. Well, have we ALL really learned?
1. Man killed by a robot in South Korea: As reported by the BBC on November 8, 2023, a robot was unable to distinguish a man from the boxes of food it was handling. “The incident occurred when the man, a robotics company employee in his 40s, was inspecting the robot. The robotic arm, confusing the man for a box of vegetables, grabbed him and pushed his body against the conveyer belt, crushing his face and chest, South Korean news agency Yonhap said.” This was a very unfortunate event, and one that seems to have been avoidable. The man was performing some form of maintenance on the robot’s sensors when the incident occurred. Functional safety standards such as IEC 61508 define maintenance processes that require safety-related equipment to be serviced safely.
2. Microsoft’s AI-generated poll showed up next to an article about the death of a 21-year-old woman, speculating on the cause of death: Business Insider reported on November 1, 2023 that a newspaper publisher claimed Microsoft damaged its reputation by placing an insensitive poll, generated by AI, next to an article about the death of a woman. “An AI-generated poll asking readers to vote on whether they thought the woman had died by murder, suicide or accident appeared next to the article on Microsoft Start.” This triggered anger from both readers and the news organization. When we consider Responsible AI, one of the important starting points is establishing AI Principles and Policies. One of Microsoft’s AI Principles, published on Microsoft’s website and dated March 10, 2023, is “Inclusive and respectful”. It reads: “The first principle of AI that Microsoft follows is ensuring that AI technology is inclusive and respectful of human rights. AI should be designed and used to enhance human capability, not replace it. This means that AI systems should not discriminate against people based on their race, gender, religion, or any other characteristic. They should be accessible to everyone, including people with disabilities. Moreover, AI should respect human dignity, privacy, and autonomy.” This incident appears to violate Microsoft’s first principle.
3. False AI-generated allegations against four consultancy firms: The Guardian reported on November 2, 2023 that a group of academics had used case studies generated by Google Bard AI, containing false statements, to make allegations against four consultancy firms. The article goes on to say, “The academics, who specialize in accounting, were urging a parliamentary inquiry into the ethics and professional accountability of the consultancy industry to consider broad regulation changes, including splitting up the big four. Part of the original submission relied on the Google Bard AI tool, which the responsible academic had only begun using that same week. The AI program generated several case studies about misconduct that were highlighted by the submission.” These case studies were found to be fabricated, damaging the reputation of the consultancy firms. This occurred a year after the release of ChatGPT, by which time hundreds if not thousands of articles had been written on cases where ChatGPT made up information that, at first impression, looked very real. Is it the fault of Google Bard AI, or of the academics who relied on the information given to them without fact checking?