robinandthe7hoods
14 hours ago
When Tobin Smith talked to Gabriel René near the end of October, he was told the results of all 26 Atari games would be released, not just one game.
I doubt René would lie to a well-known newsletter writer with wide appeal who put $100,000 of his own money into the stock.
So, the rest of the results should be forthcoming any time, whatever they are.
A newsletter writer could be the death knell for a small company trying to attract investors.
The 3rd week of January will see something coming out, imo.
All the holidays will be over, and René is calling for a Q2 public release of Genius, which begins in March.
They will need a huge money raise to get it done, along with marketing.
Bandicoot_Inv
5 days ago
Dec. 23, 2024 — VERSES AI Inc. (CBOE:VERS) (OTCQB:VRSSF), a cognitive computing company specializing in next-generation intelligent systems, announces that the Company has extended its exclusive contract with Professor Karl Friston, acclaimed neuroscientist, to continue his role as Chief Scientist. The extension includes additional performance-based incentives that recognize recent breakthroughs in research and development, contributions to its flagship product Genius™, and further expected technological advancements.
“Our work with Professor Friston has exceeded our expectations,” said Gabriel René, founder and CEO of VERSES. “His revolutionary theories on Active Inference have been central to the creation and continued development of Genius, as well as our broader cognitive computing initiatives. We believe the results from Genius have been nothing less than spectacular, some of which we previewed with the Mastermind exercise recently. The recent success in the Atari benchmarks further underscores the transformative potential of these systems, and we look forward to sharing more results in the coming weeks. We are thrilled to deepen this collaboration as we build on these milestones and drive innovation forward together.”
Professor Friston, whose partnership with VERSES is central to advancing the practical application of his research, shared his thoughts on the renewed agreement:
“I am delighted to be able to extend my work with a company so deeply committed to translating the right scientific principles into transformative and sustainable applications,” said Professor Friston. “VERSES’ steadfast dedication to pioneering the commercialization of Active Inference research is already yielding exceptional results. My continued commitment to VERSES reflects not only a shared belief in this ambitious vision, but also a dedication to advancing the groundbreaking work we have undertaken together. I eagerly anticipate contributing to this mission in the years to come — as we aim to shape the future of intelligent ecosystems.”
https://www.globenewswire.com/news-release/2024/12/23/3001222/0/en/VERSES-Renews-Agreement-with-Chief-Scientist-Karl-Friston-to-Continue-Innovation.html
Bandicoot_Inv
2 weeks ago
VERSES Genius™ Outperforms OpenAI Model in Code-Breaking Challenge, “Mastermind”
High-Performance Agent Surpasses Leading AI Model in Accuracy, Speed, and Cost Efficiency
VANCOUVER, British Columbia, Dec. 17, 2024 (GLOBE NEWSWIRE) -- VERSES AI Inc. (CBOE:VERS) (OTCQB:VRSSF) ("VERSES" or the "Company"), a cognitive computing company, today revealed performance highlights of its flagship product Genius winning the code-breaking game Mastermind in a side-by-side comparison with a leading generative AI model, OpenAI's o1-preview, which is positioned as an industry-leading reasoning model. Over one hundred test runs, Genius consistently outperformed OpenAI's o1-preview model, solving the game one hundred and forty (140) times faster and more than five thousand (5,000) times cheaper.
“Today we’re showcasing Genius’ advanced reasoning performance against state-of-the-art deep learning-based methods that LLMs are based on,” said Hari Thiruvengada, VERSES Chief Technology Officer. “Mastermind was the perfect choice for this test because it requires reasoning through each step logically, predicting the cause-and-effect outcomes of its decisions, and dynamically adapting to crack the code. This exercise demonstrates how Genius outperforms on tasks requiring logical and cause-and-effect reasoning, while exposing the inherent limitations of correlational language-based approaches in today’s leading reasoning models.
“This is just a preview of what’s to come. We’re excited to show how additional reasoning capabilities, available in Genius today and demonstrated with Mastermind, will be further showcased in our upcoming Atari 10k benchmark results,” Thiruvengada continued.
The comparison involved 100 games of Mastermind, a reasoning task requiring the models to deduce a hidden code through logical guesses informed by feedback hints. Key metrics included success rate, computation time, number of guesses, and total cost.
In the exercise, VERSES compared OpenAI's advanced reasoning model, o1-preview, to Genius. Each model attempted to crack the Mastermind code over 100 games, with up to ten guesses per game. After each guess, the model receives a hint and must reason about the still-unknown portion of the code; every position must be correct to crack it. For perspective, you can play the game at mastermindgame.org.
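For readers who want a concrete feel for the task, here is a minimal sketch of the game mechanics in Python: the standard Mastermind feedback rule (exact matches plus color-only matches) and a naive solver that simply guesses any code still consistent with all hints so far. This is only an illustration of the benchmark's rules as described in the release (4 positions, 6 colors, 10 guesses); it is not VERSES' method, and the color labels are made up for the example.

```python
from collections import Counter
from itertools import product

COLORS = "RGBYOP"   # 6 possible colors (labels are illustrative)
POSITIONS = 4       # 4 code positions, per the press release

def feedback(secret, guess):
    """Return (exact, partial): exact = right color in the right position,
    partial = right color in the wrong position."""
    exact = sum(s == g for s, g in zip(secret, guess))
    # Color overlap ignoring position, then subtract the exact matches.
    overlap = sum((Counter(secret) & Counter(guess)).values())
    return exact, overlap - exact

def solve(secret, max_guesses=10):
    """Naive consistency solver: always guess the first code that is
    consistent with every hint received so far. Returns the number of
    guesses used, or None if the 10-guess limit is exceeded."""
    candidates = ["".join(p) for p in product(COLORS, repeat=POSITIONS)]
    for turn in range(1, max_guesses + 1):
        guess = candidates[0]
        fb = feedback(secret, guess)
        if fb == (POSITIONS, 0):
            return turn  # code cracked
        # Keep only codes that would have produced the same hint.
        candidates = [c for c in candidates if feedback(c, guess) == fb]
    return None  # failed within the guess limit
```

Even this brute-force strategy reliably cracks a 4-position, 6-color code within the 10-guess limit; the interesting part of the benchmark is therefore not whether the game is solvable, but how quickly, cheaply, and consistently each model reasons its way through the hints.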
A highlight of the results is below. You can find a more detailed description and results of the tests on our blog at verses.ai.
The exercise: VERSES’ team conducted 100 games for each AI model, using the same secret code parameters: 4 positions and 6 possible colors. Results were measured by success rate, computation time, number of guesses, and total cost. The comparison is summarized below:
Metric                      Genius™                      o1-preview
Success Rate                100%                         71% (29% fail rate)
Total Compute Time          5 minutes, 18 seconds        12.5 hours
                            (avg 3.1s per game)          (avg 345s per game)
Total Cost for 100 Games    $0.05 USD (est.)             $263 USD
Hardware Requirements       Standard laptop (M1)         GPU-based cloud
Performance Highlights:
Accuracy and Reliability. Genius solved the code every time in a consistent number of steps.
Speed. Genius consistently solved games in 1.1–4.5 seconds, while o1-preview's solve times ranged from 7.9 to 889 seconds (roughly 15 minutes at the longest).
Efficiency. Genius’ total compute time for 100 games was just over 5 minutes, compared to ChatGPT’s 12.5 hours.
Cost. Genius’ compute cost was estimated at $0.05 USD for all 100 games, compared to ChatGPT’s o1 model at $263 USD.
In summary, Genius solved Mastermind 100% of the time and was 140 times faster and 5,260 times cheaper than o1-preview.
“These impressive results highlight a critical gap in today’s AI landscape: the limitations of language-based models like OpenAI’s o1 in handling logical reasoning tasks precisely and reliably,” said Gabriel René, founder and CEO of VERSES. “Mastermind code-breaking is an indicative test that showcases the class of logical reasoning and understanding of cause and effect needed for real-world applications like cybersecurity, fraud detection, and financial forecasting—domains where causality, accuracy, and efficiency are non-negotiable. Genius not only excels at these tasks but does so faster, cheaper, and with unparalleled consistency, making it ideal for addressing complex business challenges.”
Mastermind™ is a registered trademark of Pressman Inc.
https://www.globenewswire.com/news-release/2024/12/17/2998249/0/en/VERSES-Genius-Outperforms-OpenAI-Model-in-Code-Breaking-Challenge-Mastermind.html