AI lacks humanity and conscience
· Citizen

If ever there was a solid indication that artificial intelligence (AI) has already moved into a potentially damaging, uncontrollable phase, it’s the news that American prosecutors are investigating AI chatbot ChatGPT for criminal liability in a murder case.
Reports are that Phoenix Ikner, who allegedly opened fire on the Florida State University campus last year, killing two people and wounding six others, had asked ChatGPT beforehand which weapon and ammunition would be best suited for his attack.
Florida attorney-general James Uthmeier has now opened a criminal investigation into ChatGPT maker OpenAI because the AI platform gave Ikner the information he requested.
OpenAI denies responsibility, but the case echoes civil actions taken in other instances where AI bots are alleged to have encouraged people to commit suicide.
Big business has been sued before over damage caused by its products, but legal experts say this case is different: here, it can be argued that a company's product encouraged the commission of a crime.
It would appear that the algorithmic safeguards are not strong enough; in any event, AI is learning, and changing, exponentially.
One of the things it is apparently not developing, however, is something resembling a human conscience.
And that should worry humanity about what the future might bring.