{"posts":[{"id":20269,"title":"OVER Robotics: Empowering Machines to See the World","excerpt":"At OVER, our mission has always been to make the physical world digitally accessible to create a bridge between reality and the digital layer of information that defines our time. Today, we\u2019re taking that vision one step further. We\u2019re expanding the OVER ecosystem with a new framework that extends our technology beyond humans to machines. [&hellip;]","content":"<p><span style=\"font-weight: 400;\">At OVER, our mission has always been to make the physical world digitally accessible to create a bridge between reality and the digital layer of information that defines our time.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Today, we\u2019re taking that vision one step further.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We\u2019re expanding the OVER ecosystem with a new framework that extends our technology beyond humans to <\/span><b>machines<\/b><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><b>Welcome to OVER Robotics.<\/b><\/p>\n<p>&nbsp;<\/p>\n<hr \/>\n<h2><b>From Mapping the World to Teaching Machines to Perceive It<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">Over the past two years, OVER has built the largest and richest 3D mapping dataset in existence:<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><b>174,000 unique locations mapped<\/b><span style=\"font-weight: 400;\">, <\/span><b>86.8 million images<\/b><span style=\"font-weight: 400;\">, and <\/span><b>781 Terabytes of data<\/b><span style=\"font-weight: 400;\">, growing every month by more than <\/span><b>8,000 new locations<\/b><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This foundation unlocks an unprecedented opportunity: to give machines the ability to <\/span><b>see, understand, and navigate the world<\/b><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">OVER Robotics focuses 
on two fundamental pillars of robotics intelligence:<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><b>Machine Perception<\/b><span style=\"font-weight: 400;\"> and <\/span><b>Simulation<\/b><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><b>\u00a0<\/b><\/p>\n<hr \/>\n<h2><\/h2>\n<h2><b>Machine Perception: Making the World Machine-Readable<\/b><\/h2>\n<div style=\"width: 1920px;\" class=\"wp-video\"><video class=\"wp-video-shortcode\" id=\"video-20269-1\" width=\"1920\" height=\"1080\" preload=\"metadata\" controls=\"controls\"><source type=\"video\/mp4\" src=\"https:\/\/blog.ovr.ai\/wp-content\/uploads\/2025\/12\/VD-VPS-MAPPINGS_FINAL-1.mp4?_=1\" \/><a href=\"https:\/\/blog.ovr.ai\/wp-content\/uploads\/2025\/12\/VD-VPS-MAPPINGS_FINAL-1.mp4\">https:\/\/blog.ovr.ai\/wp-content\/uploads\/2025\/12\/VD-VPS-MAPPINGS_FINAL-1.mp4<\/a><\/video><\/div>\n<p><span style=\"font-weight: 400;\">Humans and animals have evolved to turn 2D visual input into rich 3D mental representations of the world around them. 
That ability to perceive depth, dimensions, context, and motion is what allows us to thrive in the physical world.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Now imagine giving that same perceptual intelligence to robots.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">By combining OVER\u2019s ever-growing 3D map of the world with our <\/span><b>Visual Positioning System (VPS)<\/b><span style=\"font-weight: 400;\"> and <\/span><b>Vision Foundation Models<\/b><span style=\"font-weight: 400;\">, we\u2019re creating a universal layer of <\/span><b>machine-readable reality<\/b><span style=\"font-weight: 400;\">, enabling robots to understand <\/span><b>where they are<\/b><span style=\"font-weight: 400;\">, <\/span><b>what\u2019s around them<\/b><span style=\"font-weight: 400;\">, and <\/span><b>how to reach their destination<\/b><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Our Machine Perception framework integrates natively with <\/span><b>ROS 2<\/b><span style=\"font-weight: 400;\">, the open standard that powers most robotics systems, ensuring <\/span><b>compatibility out of the box<\/b><span style=\"font-weight: 400;\">.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><span style=\"font-weight: 400;\">With it, any robot, from autonomous drones to delivery bots to industrial machines, can finally anchor itself to the real world.<\/span><\/p>\n<p><b>\u00a0<\/b><\/p>\n<hr \/>\n<h2><\/h2>\n<h2><b>Robotic Simulation: Closing the Sim-to-Real Gap<\/b><\/h2>\n<div style=\"width: 1920px;\" class=\"wp-video\"><video class=\"wp-video-shortcode\" id=\"video-20269-2\" width=\"1920\" height=\"1080\" preload=\"metadata\" controls=\"controls\"><source type=\"video\/mp4\" src=\"https:\/\/blog.ovr.ai\/wp-content\/uploads\/2025\/12\/mix-gaussian.mp4?_=2\" \/><a 
href=\"https:\/\/blog.ovr.ai\/wp-content\/uploads\/2025\/12\/mix-gaussian.mp4\">https:\/\/blog.ovr.ai\/wp-content\/uploads\/2025\/12\/mix-gaussian.mp4<\/a><\/video><\/div>\n<p><span style=\"font-weight: 400;\"><b>Video courtesy of GaussGym. <\/b>Source: <strong><a href=\"https:\/\/gauss-gym.com\/\">https:\/\/gauss-gym.com\/<\/a><\/strong><\/span><\/p>\n<p><span style=\"font-weight: 400;\">The new wave of robotics isn\u2019t about metal and motors; it\u2019s about <\/span><b>intelligence<\/b><span style=\"font-weight: 400;\">.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><span style=\"font-weight: 400;\">Mechanical innovation made robots move.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><span style=\"font-weight: 400;\">Now, <\/span><b>Physical AI<\/b><span style=\"font-weight: 400;\"> is making them think.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The best way to teach robots how to operate in the real world is through <\/span><b>simulation<\/b><span style=\"font-weight: 400;\">, a safe, scalable environment where millions of training hours can be compressed into a single day. 
But simulated worlds have long suffered from a critical flaw: they\u2019re not real enough.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A robot that learns perfectly in simulation often fails when deployed in reality, a problem known as the <\/span><b>Sim-to-Real Gap<\/b><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">OVER Robotics is closing that gap.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">By plugging our <\/span><b>174,000 Gaussian Splats<\/b><span style=\"font-weight: 400;\">, generated with more than <\/span><b>80,000 hours of GPU computation<\/b><span style=\"font-weight: 400;\">, into open-source simulation environments like <\/span><b><a href=\"https:\/\/gauss-gym.com\/\">GaussGym<\/a><\/b><span style=\"font-weight: 400;\"> and <\/span><a href=\"https:\/\/arxiv.org\/html\/2508.17600v1\"><span style=\"font-weight: 400;\"><strong>GWM<\/strong><\/span><\/a><span style=\"font-weight: 400;\">, we\u2019re turning synthetic worlds into <\/span><b>photorealistic, data-rich environments<\/b><span style=\"font-weight: 400;\"> that mirror the complexity of the real world.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/p>\n<p><span style=\"font-weight: 400;\">And we\u2019re going further.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Our dataset will fuel the next generation of <\/span><b>World Models<\/b><span style=\"font-weight: 400;\">, AI systems capable of generating <\/span><b>infinite, geometrically consistent worlds<\/b><span style=\"font-weight: 400;\">, extending well beyond what\u2019s been captured, and finally bridging simulation and reality. 
The chart below shows the scale of OVER\u2019s 3D Maps dataset compared to the datasets used to train existing AI models.<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-20272 size-large alignleft\" src=\"https:\/\/blog.ovr.ai\/wp-content\/uploads\/2025\/12\/Screenshot-2025-12-19-at-10.53.14-1024x816.png\" alt=\"Chart comparing the scale of OVER\u2019s 3D Maps dataset to the datasets used to train existing AI models\" width=\"1024\" height=\"816\" srcset=\"https:\/\/blog.ovr.ai\/wp-content\/uploads\/2025\/12\/Screenshot-2025-12-19-at-10.53.14-1024x816.png 1024w, https:\/\/blog.ovr.ai\/wp-content\/uploads\/2025\/12\/Screenshot-2025-12-19-at-10.53.14-300x239.png 300w, https:\/\/blog.ovr.ai\/wp-content\/uploads\/2025\/12\/Screenshot-2025-12-19-at-10.53.14-768x612.png 768w, https:\/\/blog.ovr.ai\/wp-content\/uploads\/2025\/12\/Screenshot-2025-12-19-at-10.53.14-1536x1225.png 1536w, https:\/\/blog.ovr.ai\/wp-content\/uploads\/2025\/12\/Screenshot-2025-12-19-at-10.53.14.png 1746w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p><span style=\"font-weight: 400;\">For a more detailed comparison, see our <\/span><strong><a href=\"https:\/\/ovr-assets.s3.eu-central-1.amazonaws.com\/website\/Dataset_technical_report.pdf\">Dataset Technical Report<\/a>.<\/strong><\/p>\n<p>&nbsp;<\/p>\n<hr \/>\n<h2><b>The Next Chapter<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">This is just the beginning.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In the coming weeks, we\u2019ll share our <\/span><b>detailed roadmap<\/b><span style=\"font-weight: 400;\"> for OVER Robotics, including release timelines and integrations. 
But one thing is already here:<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><span style=\"font-weight: 400;\">The <\/span><b>first iteration of our Machine Perception system integrated with ROS 2<\/b><span style=\"font-weight: 400;\"> is dropping <\/span><b>today: <\/b><a href=\"https:\/\/github.com\/OVR-Platform\/over_vps\"><b>https:\/\/github.com\/OVR-Platform\/over_vps<\/b><\/a><\/p>\n<p><span style=\"font-weight: 400;\">At OVER, we believe the next intelligent revolution won\u2019t happen on screens; it will happen in the real world.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><span style=\"font-weight: 400;\">And it will be powered by machines that can truly see.<\/span><\/p>\n<p><b>OVER Robotics. Empowering Machines to See the World.<\/b><\/p>\n","permalink":"over-robotics-empowering-machines-to-see-the-world","date":"2025-12-19 10:04:55","image_small":"https:\/\/blog.ovr.ai\/wp-content\/uploads\/2025\/12\/VD-ROBOT-OPEN-EYES-V1-150x150.jpg","image_medium":"https:\/\/blog.ovr.ai\/wp-content\/uploads\/2025\/12\/VD-ROBOT-OPEN-EYES-V1-300x169.jpg","image_large":"https:\/\/blog.ovr.ai\/wp-content\/uploads\/2025\/12\/VD-ROBOT-OPEN-EYES-V1-1024x576.jpg","image_full":"https:\/\/blog.ovr.ai\/wp-content\/uploads\/2025\/12\/VD-ROBOT-OPEN-EYES-V1.jpg","single_url":"https:\/\/blog.ovr.ai\/over-robotics-empowering-machines-to-see-the-world\/","translations":{"en":{"single_url":"https:\/\/blog.ovr.ai\/over-robotics-empowering-machines-to-see-the-world\/","permalink":"over-robotics-empowering-machines-to-see-the-world"},"fr":{"single_url":"https:\/\/blog.ovr.ai\/over-robotics-empowering-machines-to-see-the-world\/","permalink":"over-robotics-empowering-machines-to-see-the-world"},"es":{"single_url":"https:\/\/blog.ovr.ai\/over-robotics-empowering-machines-to-see-the-world\/","permalink":"over-robotics-empowering-machines-to-see-the-world"},"tr":{"single_url":"https:\/\/blog.ovr.ai\/over-robotics-empowering-machines-to-see-the-world\/","permalin
k":"over-robotics-empowering-machines-to-see-the-world"},"zh":{"single_url":"https:\/\/blog.ovr.ai\/over-robotics-empowering-machines-to-see-the-world\/","permalink":"over-robotics-empowering-machines-to-see-the-world"}}}]}