An optional, non-mandatory AI jurisdiction may qualify as a "yes."
If all decisions are actually made by AI but are presented as if they were made by human judges, it does not qualify as a "yes."
For context:
The Brazilian judiciary is relatively technologically advanced compared to courts in other countries. It has adopted a range of digital systems to streamline its work: electronic case-file management, digital communication with the parties to a case, and decision-support tools. The judiciary's use of technology has grown significantly in recent years and is expected to keep evolving.
It doesn't seem like AIs would actually be responsible for the decisions in this scenario, would they? It sounds more like a ChatGPT-style tool drafting the basic structure of rulings and interlocutory decisions, while human judges remain accountable for the final decision:
> "Já a ferramenta de inteligência artificial auxiliará as atividades de julgamento com a geração de relatórios dos autos, localização e resumo de peças, citações, jurisprudência ou argumentos citados, além da apresentação de propostas de texto para decisões interlocutórias, sentenças e acórdãos."
@FranklinBaldo
Yeah, it's one thing for AI to help out, but another for it to decide cases.
But:
As AI gets smarter, we'll likely see it used more in court. Still, judges probably won't want to just rubber-stamp AI's work. They'll likely start by letting AI handle the easy cases. It's all about taking it step by step, making sure everyone's on board and that trust is built along the way.
If an AI is used to automatically assess whether a given legal action is eligible for a decision under the system of authoritative precedents (example), does this count as a YES?
What about using AIs to make administrative decisions (e.g., accepting/denying requests for outstanding arrears, employee benefit requests by civil servants, etc.)? Does that qualify as a YES?