Could An Electrical Pandemic Destroy Shipping?
by John Konrad (gCaptain) gCaptain has obtained a troubling report from an American captain, who claims that a shipowner utilized an AI bot, potentially ChatGPT, to suggest modifications to the ship’s voyage plan. While the captain chose not to adhere to the recommendations, they are concerned that others may not exercise the same caution.
gCaptain was asked not to reveal the name of the captain or specific details about the incident that could be traced back to them. Given the small size of the US-flag fleet, this request has severely constrained our ability to report on the story, but we can provide an overview of this troubling report.
During a recent voyage planning meeting – a process in which navigational routes, safety measures, and logistical considerations are meticulously evaluated, and the crew works to optimize efficiency while mitigating potential hazards – several minor safety concerns about the ship’s route and cargo worried the ship’s officers.
The captain composed a message outlining the concerns and emailed it to the shipowner for guidance. Typically in these situations, a maritime superintendent working for the shipowner will review any concerns with various experts in the office (e.g. experienced captains, cargo planners, engineers) and provide guidance to the ship’s captain on ways to mitigate risk.
In this case, the captain received a quick reply. The captain informed gCaptain that the first and final paragraphs of the message contained pleasantries and appeared to be written by a human, but the advice on how to mitigate risk “sounds like an AI wrote it”. This message included advice that would have required a course change during the voyage.
At this point, it’s difficult to prove the message was created by an AI, but it’s certainly in a style similar to that produced by AI. As a former captain, I can confirm the advice provided was rather trite and unlikely to put the ship in harm’s way, but the point of having International Safety Management procedures is so that captains with more experience shoreside – and other shoreside managers who might be privy to critical information not yet shared with the captain – can confirm there is no immediate danger.
We asked ChatGPT-4 its opinion on the subject with the following prompt: “Is it ok for chatgpt to advise on safety critical operations like ship voyage planning?” The AI responded with this:
As an AI language model, ChatGPT is not designed or certified to provide advice on safety-critical operations like ship voyage planning. While ChatGPT can generate text based on the input it receives, its understanding of complex, real-world situations and safety considerations is limited.
Safety-critical operations, such as ship voyage planning, demand the expertise and judgment of experienced professionals who can evaluate risks and contingencies in a comprehensive manner. Relying on AI-generated advice for such operations could lead to potentially dangerous situations and unintended consequences. It is essential that any advice or suggestions made by AI systems be carefully assessed and verified by qualified human experts before implementation, especially in safety-critical domains.
Yesterday, OpenAI announced GPT-4 with advanced analytic processing. With this new AI engine, it will be even more tempting for shoreside workers and ships’ officers – who are often overwhelmed with paperwork – to respond to concerns with the help of AI.
Conversely, AI has the potential to discover hazards and opportunities for risk mitigation in voyage planning that humans may overlook. gCaptain has received reports of ChatGPT helping to identify potential hazards not considered during the voyage planning process. Preventing the use of AI altogether – or just hampering its use with comments like “I don’t trust those robots” – could dissuade ship captains from using it and discovering a new solution that will keep their crew safe. This could even provide additional job opportunities in the future, as ChatGPT suggestions could be reviewed and edited by an experienced captain working alongside a “prompt engineer” (a prompt engineer is someone skilled in asking AI the right kinds of questions).
Regardless of whether the use of AI can be confirmed in this incident, it’s critical that organizations like the IMO, USCG, and INTERTANKO discuss the implications of AI use and provide guidance, because – whether it’s from taking the advice of AI or from not consulting it – it’s only a matter of time before a major ship incident occurs and lives are lost.
In the meantime, shipping companies could simply be more transparent and notify the captain whenever a message sent to a ship was composed with the help of AI. This notification could range from as simple as “ChatGPT was used to correct grammar in this email” to “here are some additional ideas for that problem, generated by Bing AI”.