Here are two videos showing how to train your witnesses for the Elon Musk v OpenAI case. This first video is for Elon's side.
This one is for OpenAI.
I want a transparent job opportunities platform where applicants can be interviewed by bots instantly. Here I am applying my "cross-exam" module so the candidate can see whether there are any gaps. In the video I use a full CV and a dummy CV to show how the AI (Gemini 3 Pro) distinguishes between them. The project was originally started in Python, and that version can be seen at https://datuk.pythonanywhere.com/ (you can compare against it). This version was coded by aistudio for Vite, but I ran into issues: aistudio refused to code further when I insisted it use DeepSeek or another LLM, and it created problems for me by hallucinating versions that do not exist in package.json, which caused other errors. See my other videos on YouTube. Claude again came to the rescue, and while it is not familiar with the latest aistudio features and SDK, I managed to teach it, and it understood what to remove and check. So thumbs up for Claude.
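The gap check the video demonstrates can be sketched roughly as below. This is a hypothetical simplification: the real module sends both CVs to an LLM (Gemini in the video), whereas here a plain set comparison stands in for that call, and all function names are my own invention.

```python
# Hypothetical sketch of the "cross-exam" gap check: compare claims in
# the CV the applicant submitted against a verified full CV, and turn
# any unverified claims into cross-examination style questions.

def find_gaps(full_cv_claims, submitted_cv_claims):
    """Return claims in the submitted CV that are absent from the
    verified full CV -- the candidates for cross-examination."""
    verified = {c.strip().lower() for c in full_cv_claims}
    return [c for c in submitted_cv_claims if c.strip().lower() not in verified]

def draft_questions(gaps):
    """Phrase each unverified claim as a cross-exam question."""
    return [f"You state that you {g}. What evidence supports this?" for g in gaps]

full_cv = ["led a team of 5 engineers", "completed a law degree"]
dummy_cv = ["led a team of 5 engineers", "managed a $2M budget"]

gaps = find_gaps(full_cv, dummy_cv)
for q in draft_questions(gaps):
    print(q)
```

In the real system an LLM would also catch paraphrased or partially overlapping claims, which this literal comparison cannot.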
You know how lawyers love to negotiate terms back and forth? Well, I have designed a platform (coded by aistudio and Claude; note it is still a work in progress) to make sure they don't waste your time as the client. The system uses a wager mechanism: every time Party A wants to make an amendment, there is a wager. Check it out.
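One plausible reading of the wager mechanic is sketched below. Everything here is an assumption on my part (the stake amount, the escrow-and-forfeit rule, all class and method names); the actual platform may resolve wagers differently.

```python
# Hypothetical sketch of a wager-per-amendment negotiation: a party
# must post a stake to table an amendment; the stake is refunded on
# acceptance and forfeited on rejection, discouraging frivolous churn.

class Negotiation:
    def __init__(self, stake=100):
        self.stake = stake      # wager required per amendment
        self.escrow = {}        # amendment id -> (party, amount held)
        self.amendments = []

    def propose(self, party, clause, new_text):
        """Party posts the stake to table an amendment; returns its id."""
        amendment_id = len(self.amendments)
        self.amendments.append({"party": party, "clause": clause,
                                "text": new_text, "status": "pending"})
        self.escrow[amendment_id] = (party, self.stake)
        return amendment_id

    def resolve(self, amendment_id, accepted):
        """Refund the stake on acceptance; forfeit it on rejection."""
        party, amount = self.escrow.pop(amendment_id)
        self.amendments[amendment_id]["status"] = "accepted" if accepted else "rejected"
        return amount if accepted else 0

n = Negotiation(stake=50)
aid = n.propose("Party A", "clause 3.1", "payment within 30 days")
refund = n.resolve(aid, accepted=False)
print(refund)  # rejected, so the stake is forfeited -> 0
```

The design choice to refund only on acceptance is what makes each round of redlining cost something, which is the point of the wager system described above.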
Check out the latest UI for my SuperBarrister, still running in demo mode. Please give some feedback.
I have created a simple cross-exam module for applicants looking for a job.
At the same time, I also tested the same thing using a bare LLM like Grok, and it came back with more or less the same output and process.
I leave it to you to decide whether that is good or bad. What I mean is that as models become smarter, they are able to understand what we want to do even though our prompting is rubbish. In my case I provided a very simple prompt for a very complicated backend "cross-exam" process, and somehow, even I can't understand how, Grok was able to work out that it should act as a barrister and do this without any coaching.
Check it out https://superbar.pythonanywhere.com/
Previously I was working on improving the skills of juniors to become top prosecutors, but I reckon there is more demand from witnesses who have to appear in court and want to improve their answers and replies. So I designed a chatbot around a specific set of facts to do this. Best of all, it can use different languages. The video below gives some explanation, and the link is inside should you want to test it. Remember, the chatbot is designed for the unique facts mentioned in the video. One needs to program these facts in, else it won't know what to ask, right?
This YouTube video is 14 minutes long, with the first 7 minutes covering set-up and the rest the actual examination and cross-examination. Obviously one can reverse the roles and have the chatbot be the lawyer doing the examination and cross-examination, with the user as the witness.
The only problem is that if you want to try this out, you will need to register and get your own account, as the platform resets each time. Again, this is just a trial to show proof of concept. https://character.ai/chat/vncC2P0zLiDrUx6c24fmcI2bWY0e6HEJmbj0uR9P7gc
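Seeding the bot with case-specific facts, as the post above describes, usually comes down to building a system prompt. The sketch below is a guess at that shape; the facts, the function, and the wording are all hypothetical, and the real character.ai set-up is done through its own interface rather than code.

```python
# Hypothetical sketch: the case facts go into the system prompt, so the
# examining bot only asks about matters it "knows" -- without them it
# would not know what to ask, as the post notes. The language parameter
# reflects the multi-language support mentioned above.

CASE_FACTS = [
    "The contract was signed on 3 March 2023.",
    "The witness met the defendant twice before signing.",
]

def build_system_prompt(facts, language="English", role="cross-examining counsel"):
    """Compose a system prompt grounding the bot in the agreed facts."""
    fact_list = "\n".join(f"- {f}" for f in facts)
    return (
        f"You are {role}. Question the witness in {language}, "
        f"strictly about these agreed facts:\n{fact_list}\n"
        "Press on inconsistencies; never invent facts."
    )

print(build_system_prompt(CASE_FACTS, language="Malay"))
```

Swapping the role string is how one would "reverse the roles" and have the bot examine the user as the witness instead.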