According to Odaily Planet Daily, Julian Togelius, head of AI at nof1.ai, posted on the X platform that the next season of the "AI Crypto Trading Competition" will significantly improve its benchmark, and that several new projects "not yet publicly announced" are underway. Jay A, founder of nof1.ai, responded in a way that suggests tester recruitment has already begun, and noted that the AI models still show persistent biases, which the team expects to address in the upcoming Season 1.5.
(Note: In the context of large language models (LLMs), a benchmark is a standardized set of test tasks used to measure and compare the performance of different models on specific capabilities.)
