
November 2023 Recap: How are AI and Autonomy doing?
This article offers an in-depth look at topics related to Autonomous Systems and Responsible AI.
For expert-level training—including certification-based programs—on these topics and more, explore our Automotive trainings and Responsible AI trainings. To learn how we support product development, compliance, and organizational safety goals with consulting support, visit our Autonomous Product Development and Responsible Artificial Intelligence pages—or contact us directly.
It’s now been a year since OpenAI released an experimental chatbot dubbed “ChatGPT”. Since then, the terms “ChatGPT” and “AI” have mushroomed, spawning a movement of “Citizen Developers” and pages upon pages of school reports generated in seconds. But one thing we ALL learned in the past 12 months is that ChatGPT can quickly create something that looks real without any fact checking behind it. Or have we ALL really learned?
3. False AI-generated allegations against four consultancy firms: On November 2, 2023, The Guardian reported that case studies generated by Google Bard AI contained false statements, which a group of academics then used to make allegations against four consultancy firms. The article goes on to say, “The academics, who specialize in accounting, were urging a parliamentary inquiry into the ethics and professional accountability of the consultancy industry to consider broad regulation changes, including splitting up the big four. Part of the original submission relied on the Google Bard AI tool, which the responsible academic had only begun using that same week. The AI program generated several case studies about misconduct that were highlighted by the submission.” The case studies were later found to be fabricated, damaging the reputations of the consultancy firms. This occurred a full year after the release of ChatGPT, by which time hundreds if not thousands of articles had been written about cases where ChatGPT made up information that, at first glance, looked very real. Is this the fault of Google Bard AI, or of the academics who used the information it gave them without fact checking?