The propensity of AI models to make errors that people miss has been on full display in the US legal system of late. The follies started when lawyers submitted documents citing cases that didn’t exist. Similar mistakes soon spread to other roles in the courts. Last December, a Stanford professor submitted sworn testimony containing hallucinations and errors in a case about deepfakes, despite being an expert on AI and misinformation himself.
Now judges are experimenting with generative AI too. Some believe that with the right precautions, the technology can expedite legal research, summarize cases, draft routine orders, and overall help speed up the court system, which is badly backlogged in many parts of the US. Are they right to be so confident in it? Read the full story.
—James O’Donnell
What you may have missed about GPT-5
OpenAI’s new GPT-5 model was supposed to offer a glimpse of AI’s newest frontier. It was meant to mark a leap toward the “artificial general intelligence” that tech’s evangelists have promised will transform humanity for the better.
Against those expectations, the model has largely underwhelmed. But there’s one other thing to take from all this. Among other suggestions for potential uses of its models, OpenAI has begun explicitly telling people to use them for health advice. It’s a change in approach that signals the company is wading into dangerous waters. Read the full story.
—James O’Donnell
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.