VERSES® Unveils Robotics Architecture that Works Without Pre-Training
VERSES AI (OTCQB: VRSSF) has unveiled a groundbreaking robotics architecture that outperforms existing solutions without requiring pre-training. The company's model achieved a 66.5% success rate across household tasks compared to 54.7% for the previous best alternative, which needed 1.3 billion training steps.
The VERSES system successfully completed tasks like tidying rooms, preparing groceries, and setting tables through three integrated modules: vision for environment mapping, planning for task breakdown, and control for movement execution. Unlike traditional robotics solutions that either rely on pre-programming or extensive deep learning training, VERSES' approach enables robots to adapt to new situations through active environmental exploration.
This breakthrough could revolutionize automation across industries, from warehouses and factories to domestic applications, by eliminating the need for costly training and improving adaptability to changing environments.
- Achieved 66.5% success rate in tasks, outperforming the previous best rate of 54.7%
- Requires no pre-training, compared to competitors needing 1.3 billion training steps
- Technology enables robots to adapt to new environments without reprogramming
- Potential applications across multiple industries including warehouses, factories, and homes
- Current success rate of 66.5% still leaves significant room for improvement
- Technology is still in development phase with limited real-world implementation data
VERSES multi-agent robotics model shown to outperform current methods on Meta’s Habitat Benchmark without pre-training
VANCOUVER, British Columbia, Aug. 14, 2025 (GLOBE NEWSWIRE) -- VERSES AI Inc. (CBOE: VERS) (OTCQB: VRSSF) (“VERSES” or the “Company”), a cognitive computing company specializing in next-generation agentic software systems, today unveiled details on the development of its robotics model.
The VERSES robotics architecture accomplished typical household tasks (tidying the room, preparing the groceries, and setting the table) better than other robotics models, and it did so without any pre-training. A video of the robot performing these tasks can be seen on the VERSES website: https://www.verses.ai/blog.
Robots often perform well on scripted tasks but can freeze when faced with new situations; even something as simple as a box in the wrong place can halt progress. Newer approaches can be more flexible but require huge amounts of training to be effective. This makes existing robotics solutions difficult to use in real-world applications, where new situations constantly arise. Challenges like these play to the strengths of VERSES models, which adapt quickly to their environment.
“Currently, robotics systems are often brittle and need huge amounts of training data, which makes them expensive and prone to going wrong,” said Sean Wallingford, former CEO and President of Swisslog, one of the world’s leading logistics automation companies. “For instance, if you bring a robot to a new factory or ask it to do a different job, it will need a lot of re-training and may not be reliable. VERSES’ breakthroughs are exciting because they offer an alternative approach. If we can deploy robots without training, they will be viable in a wide range of activities, from factories and warehouses to domestic and commercial applications.”
In a published paper entitled “Mobile Manipulation with Active Inference for Long-Horizon Rearrangement Tasks”, written by members of the Company’s research lab, the VERSES robotics model is compared to a deep learning alternative in three tasks: tidying a room, preparing groceries, and setting a table. The VERSES robotics model achieved a success rate of 66.5%, compared to 54.7% for the deep learning alternative.
“I believe that by combining our world-modeling and our active inference capabilities, we’ve shown robots can think on their ‘feet’, navigating and completing complex tasks without months of costly training,” said Hari Thiruvengada, VERSES CTO. “Our breakthrough has the potential to transform how robots operate across industries, from factories and warehouses to homes and public spaces, potentially unlocking a new era of truly adaptive, reliable automation.”
Notes to editors
- The leading model in the Habitat rearrangement challenge was “Multi-skill mobile manipulation”. Further details can be found here: https://aihabitat.org/challenge/2022_rearrange/
- Further details can be found below:
Robots generally fall into two categories: drive-by-wire or deep learning. Drive-by-wire means everything is pre-programmed. Deep learning relies on vast amounts of data for training.
Autonomous Guided Vehicles (AGVs)
The drive-by-wire approach breaks down if anything is out of place.
For instance, a human might program a robot to move an object to a location by providing a very detailed list of tasks in the form of a plan (e.g. “pick an item from the shelf and place it on the shelf”), down to the specific movements needed by each joint of the robot’s arm.
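As a purely illustrative, hypothetical sketch (not code from VERSES or any vendor), the routine below shows why this breaks down: every pose is hard-coded at commissioning time, so even a five-centimetre deviation halts the task instead of being absorbed.

```python
# Hypothetical drive-by-wire pick-and-place script (illustrative only).
# Every pose is hard-coded up front; nothing is re-planned at run time.

from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float

# Pre-programmed poses, measured once during commissioning.
SHELF_PICK_POSE = Pose(1.20, 0.35, 0.90)
PLACE_POSE = Pose(0.40, 0.80, 0.75)
TOLERANCE = 0.02  # metres

def detect_item() -> Pose:
    """Stub for a fixed-position sensor reading (assumed, for illustration)."""
    return Pose(1.25, 0.35, 0.90)  # today the item sits 5 cm off its mark

def move_arm_to(pose: Pose) -> None:
    print(f"moving arm to ({pose.x}, {pose.y}, {pose.z})")

def run_scripted_task() -> None:
    actual = detect_item()
    # The script only checks that the world matches its expectations;
    # it has no mechanism to re-plan when it does not.
    if abs(actual.x - SHELF_PICK_POSE.x) > TOLERANCE:
        raise RuntimeError("item not at expected pose")
    move_arm_to(SHELF_PICK_POSE)
    move_arm_to(PLACE_POSE)

if __name__ == "__main__":
    try:
        run_scripted_task()
    except RuntimeError as err:
        print(f"production halted: {err}")  # a 5 cm offset stops everything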
However, factories and homes are always changing. Robots often struggle to adapt, which can cause them to stop or work very slowly.
To work around these inherent limitations, robot environments are often tightly controlled. For instance, robots may be placed in a cage or in factory areas where no humans are allowed. This practice greatly reduces the robots' usefulness.
Deep learning approaches
Deep learning approaches, by contrast, are trained on vast volumes of data, which makes them more flexible.
However, these methods struggle with situations outside their training. Simple issues, like a bottle falling over or a chair being out of place, can confuse and paralyze the robot as it cannot adapt.
For instance, a robot replenishing a production line in a factory may not be adaptable enough to switch aisles when its initial route is blocked. Or, if it is placing a bowl on a table, it may not be able to adapt to existing objects, such as a wine glass, even if it placed them there itself.
VERSES solution
We have solved this problem of adaptability.
When a human needs to get a drink in a new apartment, they don't succeed because they have practiced this task in hundreds of different apartments; they are able to adapt because they have a model of how the world works. This allows humans to figure out that they need to open the refrigerator and grab a bottle. VERSES technology equips robots with a similar world model, allowing them to execute three tasks in different apartment layouts.
VERSES models, similar to our work on the AXIOM digital brain, don’t require any pre-training and instead just adapt by exploring the environment.
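For background, here is the standard objective from the active inference literature (not a formula reproduced from the VERSES paper): an active inference agent selects actions by minimising the expected free energy $G(\pi)$ of each candidate policy $\pi$, which balances reaching preferred outcomes against exploring to reduce uncertainty:

$$
G(\pi) = \sum_{\tau} \Big( \underbrace{-\,\mathbb{E}_{q(o_\tau \mid \pi)}\big[\ln p(o_\tau \mid C)\big]}_{\text{pragmatic: seek preferred outcomes}} \;-\; \underbrace{\mathbb{E}_{q(o_\tau \mid \pi)}\big[ D_{\mathrm{KL}}\!\left( q(s_\tau \mid o_\tau, \pi) \,\|\, q(s_\tau \mid \pi) \right) \big]}_{\text{epistemic: seek information gain}} \Big)
$$

Here $o_\tau$ are predicted observations, $s_\tau$ are hidden states of the world model, and $C$ encodes the agent's prior preferences. The epistemic term is what makes exploration intrinsically valuable: actions that are expected to reduce uncertainty score well, so the agent learns about a new environment as it acts, without a separate pre-training phase.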
VERSES models consist of three modules working together (a minimal sketch follows the list):
- Vision: turning raw pixels into an understanding of the scene and a map of the room the robot is in.
- Planning: taking a task such as setting the table for dinner and breaking it into subtasks (e.g. opening a drawer and putting cutlery on the table) without needing detailed instructions.
- Control: translating those subtasks into the specific movements of the robot and its arm.
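To make the division of labour concrete, here is a minimal, hypothetical sketch of how such a perceive-plan-act loop might fit together. All class names and interfaces below are assumptions for illustration, not VERSES APIs; the actual architecture is described in the paper linked at the end of this section.

```python
# Hypothetical sketch of a vision -> planning -> control loop.
# All names and interfaces are illustrative assumptions, not VERSES code.

from typing import Any, Dict, List

class Vision:
    def perceive(self, pixels: Any) -> Dict[str, Any]:
        """Turn raw pixels into a scene description and room map."""
        return {"objects": {"drawer": (0.4, 1.0), "cutlery": (0.5, 1.2)}}

class Planner:
    def decompose(self, task: str, scene: Dict[str, Any]) -> List[str]:
        """Break a high-level task into subtasks, given the current scene."""
        if task == "set the table":
            return ["open drawer", "pick cutlery", "place cutlery on table"]
        return []

class Controller:
    def execute(self, subtask: str, scene: Dict[str, Any]) -> bool:
        """Translate a subtask into joint motions; report success or failure."""
        print(f"executing: {subtask}")
        return True

def run(task: str, pixels: Any) -> None:
    vision, planner, controller = Vision(), Planner(), Controller()
    scene = vision.perceive(pixels)
    plan = planner.decompose(task, scene)
    while plan:
        subtask = plan.pop(0)
        if controller.execute(subtask, scene):
            scene = vision.perceive(pixels)  # re-observe after every action
        else:
            # On failure (e.g. a dropped item), re-plan from the new scene
            # instead of halting; this is where the adaptability comes from.
            scene = vision.perceive(pixels)
            plan = planner.decompose(task, scene)

run("set the table", pixels=None)
```

The design choice the sketch illustrates is that perception runs continuously and planning is cheap enough to redo whenever the world deviates from expectations.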
At each stage, the VERSES system can adapt: for instance, it can cope with unexpected objects in its way, or with needing to pick up something it has dropped.
In a paper that we will present at the International Workshop on Active Inference later this year, we demonstrate how the VERSES models compare to a leading model across three tasks: ‘TidyHouse’, ‘PrepareGroceries’, and ‘SetTable’.
Across the three tasks combined, VERSES achieved a success rate of 66.5%, compared to 54.7% for the leading model.
Critically, the VERSES model needs no training. All the VERSES model needs is basic knowledge, such as its own arm's resting pose when idle or how much resistance the arm will get from obstacles. By contrast, the baseline model requires extensive offline training: 6,400 episodes per task and 100 million steps per skill across a total of 7 skills, such as picking up an object or opening a fridge.
Use cases for this work include moving inventory around factories and warehouses.
The paper can be found at https://arxiv.org/abs/2507.17338 and additional details are available at https://www.verses.ai/blog
About VERSES
VERSES® is a cognitive computing company building next-generation agentic software systems modeled after the wisdom and genius of Nature. Designed around first principles found in science, physics and biology, our flagship product, Genius™, is an agentic enterprise intelligence platform designed to generate reliable domain-specific predictions and decisions under uncertainty. Imagine a Smarter World that elevates human potential through technology inspired by Nature. Learn more at verses.ai, LinkedIn and X.
On behalf of the Company
Gabriel René, Founder & CEO, VERSES AI Inc.
Press Inquiries: press@verses.ai
Investor Relations Inquiries
James Christodoulou, Chief Financial Officer
ir@verses.ai, +1(212)970-8889
Cautionary Note Regarding Forward-Looking Statements
This news release contains statements which constitute “forward-looking information” or “forward-looking statements” within the meaning of applicable securities laws, including statements regarding the plans, intentions, beliefs and current expectations of the Company with respect to future business activities and plans of the Company. Forward-looking information and forward-looking statements are often identified by the words “may”, “would”, “could”, “should”, “will”, “intend”, “plan”, “anticipate”, “believe”, “estimate”, “expect” or similar expressions. More particularly and without limitation, this news release contains forward-looking statements and information including, but not limited to, that the Company’s robotics models have the potential to transform how robots operate across industries, and that the Company’s robotics models could unlock a new era of truly adaptive, reliable automation.
The forward-looking statements and information are based on certain key expectations and assumptions made by the management of the Company. As a result, there can be no assurance that such plans will be completed as proposed or at all. Such forward-looking statements are based on a number of assumptions of management. Although management of the Company believes that the expectations and assumptions on which such forward-looking statements and information are based are reasonable, undue reliance should not be placed on the forward-looking statements and information since no assurance can be given that they will prove to be correct.
Forward-looking statements and information are provided for the purpose of providing information about the current expectations and plans of management of the Company relating to the future. Readers are cautioned that reliance on such statements and information may not be appropriate for other purposes, such as making investment decisions. Since forward-looking statements and information address future events and conditions, by their very nature they involve inherent risks and uncertainties. Actual results could differ materially from those currently anticipated due to a number of factors and risks. Accordingly, readers should not place undue reliance on the forward-looking statements and information contained in this news release.
The forward-looking statements and information contained in this news release are made as of the date hereof and no undertaking is given to update publicly or revise any forward-looking statements or information, whether as a result of new information, future events or otherwise, unless so required by applicable securities laws. The forward-looking statements or information contained in this news release are expressly qualified by this cautionary statement.
