- March 7, 2025
- Posted by: Bikramjit Singh
- Category: Blogs
A Visual Positioning System (VPS) is a navigation and positioning technology that relies on computer vision and machine learning to estimate a device's location from what it perceives in the environment. Traditional GPS is a satellite-based navigation system that can be imprecise, especially in large urban centers or indoors; VPS instead matches real-time camera imagery against a visual reference map captured beforehand to determine an object's location accurately. This gives VPS strong potential for applications that demand appreciably higher location accuracy, since it works from visual input such as landmarks, textures, and architectural features.
VPS depends on high-end cameras, sensors, and AI algorithms that detect features of the environment and compare them against a known database. The technology is especially useful where GPS signals are weak or blocked entirely, for example indoors, in basements, or in densely built-up city areas. VPS can also be combined with augmented reality (AR) to improve navigation applications by overlaying digital guidance assets on the real environment. Its flexibility in fast-changing environments and its real-time performance make it valuable across industries, from retail to self-driving cars.
| Parameter | Visual Positioning System (VPS) | Global Positioning System (GPS) |
|---|---|---|
| Technology | Uses visual data (images/videos) captured by cameras and 3D maps. | Relies on satellite signals for triangulation. |
| Accuracy | Centimetre-level accuracy, especially in visually mapped areas. | Typically accurate within 5–10 meters in open spaces. |
| Environment | Effective indoors and outdoors, including urban areas. | Best suited for open outdoor environments with clear sky access. |
| Infrastructure | Requires pre-mapped environments, cameras, and compatible software. | Depends on satellite networks maintained by global organizations. |
| Use Cases | Augmented reality, robotics, indoor navigation, and industrial applications. | Outdoor navigation, emergency services, and vehicle/ship tracking. |
| Scalability | Requires ongoing mapping updates for dynamic environments. | Provides universal scalability with global coverage. |
| Cost | Costs depend on creating/maintaining maps and processing visual data. | Free for civilian use; device-specific costs apply. |
| Strengths | High precision in complex environments; works indoors. | Universal availability; reliable for outdoor navigation. |
| Weaknesses | Limited to areas with pre-mapped visuals; requires advanced hardware. | Loses accuracy indoors or in obstructed environments. |
Simultaneous Localization and Mapping (SLAM)
SLAM is a technique used in robots and other autonomous systems in which the robot builds a map of the environment it is operating in while simultaneously estimating its own location within that environment. To recognize features and obstacles, move in real time, and build the map at the same time, the system employs cameras, LiDAR, and IMUs (Inertial Measurement Units). SLAM is mainly applied in robotics, autonomous vehicles, drones, and augmented reality devices. Because SLAM constructs and refines the map as the device moves, the device's position estimate improves with incoming sensor information rather than depending on a static, pre-built map.
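The predict/correct loop at the heart of SLAM-style localization can be sketched in a few lines. This is a deliberately simplified illustration, not a real SLAM implementation (production systems use EKF or pose-graph optimization over many landmarks, and estimate the map itself); the scalar `gain` and the single known landmark are assumptions for the sake of the example:

```python
import math

def predict(pose, v, omega, dt):
    """Motion update: advance a 2D pose (x, y, theta) by velocity commands."""
    x, y, theta = pose
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

def correct(pose, landmark, measured_range, measured_bearing, gain=0.5):
    """Measurement update: nudge the pose toward agreement with a
    range/bearing observation of a landmark at a known position."""
    x, y, theta = pose
    dx, dy = landmark[0] - x, landmark[1] - y
    expected_range = math.hypot(dx, dy)
    expected_bearing = math.atan2(dy, dx) - theta
    range_err = measured_range - expected_range
    bearing_err = measured_bearing - expected_bearing
    # Unit vector toward the landmark: the range residual slides the pose
    # along it, the bearing residual rotates the heading estimate.
    ux, uy = dx / expected_range, dy / expected_range
    return (x - gain * range_err * ux,
            y - gain * range_err * uy,
            theta + gain * bearing_err)
```

When the measurement matches the prediction exactly, the correction leaves the pose untouched; any disagreement pulls the estimate partway toward what the sensor saw, which is the essence of fusing odometry with observations.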
LiDAR (Light Detection and Ranging)
LiDAR is a remote sensing technology that emits laser pulses and analyzes the reflected pulses to produce high-resolution three-dimensional maps of the landscape. It functions in low-light or no-light conditions and is applied in topographical surveys, forestry, self-driving cars, environmental surveys, and excavations, among other uses. LiDAR instruments emit laser beams and measure the time each beam takes to reflect off an object; this time-of-flight data is what makes accurate 3D mapping possible.
Recent Advancements and Partnerships in Visual Positioning Systems (VPS)
Niantic’s Expansion of Lightship VPS (2024)
In 2024, Niantic extended its Lightship VPS (Visual Positioning System) to deliver much-improved augmented reality experiences in locations around the world. The expansion gives developers a rich set of tools for building location-based AR applications, with superior accuracy and reliability when mapping both outdoor and indoor environments. Niantic worked with governments, cultural institutions, and developers internationally to add layers of real-world reference points, including famous buildings and public spaces, so that AR experiences become more sophisticated and more sensitive to their surroundings. These improvements make Lightship VPS well suited to dynamic gaming, public and private wayfinding, and tourism, in line with Niantic's vision of bringing people together through real-world AR experiences. The initiative consolidates Niantic's position in the AR market while enabling creators to redefine spatial computing.
Draganfly and Vermeer Partnership (2024)
In 2024, Draganfly, a drone company, partnered with Vermeer, a provider of digital mapping solutions, to enhance the VPS (Visual Positioning System) capabilities of its aerial operations. The collaboration focuses on improving spatial positioning and orientation for unmanned aerial vehicles, particularly in complex and industrial areas. By incorporating Vermeer's mapping software into Draganfly's UAV platforms, the two companies aim to provide precise geolocation insights useful in agriculture, construction, and disaster response. The partnership reflects a shared commitment to advancing drone capability through comprehensive and accurate positioning for location services.
Google Maps Immersive View Launch (2023)
In 2023, Google launched Immersive View in Google Maps, which merges VPS with high-fidelity 3D mapping to offer users a live, three-dimensional tour of selected cities. The feature incorporates weather, traffic, and lighting data so that users can navigate city landscapes virtually. Google worked with municipalities, urban planners, and developers to tailor Immersive View to the areas likely to generate the most interest. The innovation improves navigation, tourism, and geographic discovery, and marks a significant step forward in digital mapping.
Meta’s Project Aria Collaboration (2023)
In 2023, Meta advanced a novel project, known as Project Aria, focused on developing high-quality Visual Positioning Systems for AR glasses. The project is a major step for wearable augmented reality, aiming to enrich first-person experiences indoors and in urban landscapes. Meta worked with multiple technology organizations, research institutions, and urban planners to create an efficient VPS solution grounded in practical use cases. The primary goal of these strategic collaborations was a platform that fits within Meta's broader virtual reality ecosystem, using contextual and location information to enrich AR eyewear applications.
Apple’s ARKit 7 VPS Launch (2023)
Apple launched ARKit 7 with a refined VPS for building AR-based navigation and exploration experiences. Using LiDAR and AI-based spatial modeling, Apple's Visual Positioning System lets AR applications provide highly accurate indoor navigation and interaction with the surrounding environment. Apple worked with developers and large retailers to deploy the technology in shopping centers and other large venues, complementing the ARKit library and the continued growth of VPS features tailored to iOS devices.
Market Analysis
The global Visual Positioning System (VPS) market is growing at a healthy rate and is expected to soar in the near future, driven mainly by industrial automation, defense, healthcare, retail, and logistics. The market was valued at USD 5.58 billion in 2017, was projected to grow at a CAGR of 10.65 percent to reach approximately USD 9.25 billion by 2022, and is forecast to grow at 11.7 percent over 2022-2032 to reach USD 27.4 billion.
- VPS technologies play a critical role in transforming existing industrial processes through increased automation. Unmanned Aerial Vehicles (UAVs), Automated Guided Vehicles (AGVs), and robots are now instrumental in manufacturing, delivery, and warehousing services, operating with high accuracy.
- AI-driven optical sensors and machine learning have encouraged the use of volumetric point clouds across sectors. These innovations make it possible to position objects and people, navigate accurately, and obtain analytics in real time.
Source: marketresearchfuture
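The projections above follow directly from compound annual growth. A quick check of the cited figures (assuming the CAGR compounds annually from the stated base year):

```python
def project(value, cagr, years):
    """Compound a starting value forward at a constant annual growth rate."""
    return value * (1 + cagr) ** years

# USD 5.58B (2017) at 10.65% CAGR over five years lands near the
# cited USD 9.25B figure for 2022.
v2022 = project(5.58, 0.1065, 5)
```

The same arithmetic applied to the 2022-2032 window (9.25 at 11.7 percent for ten years) comes out near USD 28B, in the same ballpark as the cited USD 27.4 billion; small differences typically reflect rounding in the published figures.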
Key Patents in Visual Positioning Systems (VPS)
White Raven Ltd – Visual Positioning System (VPS) for GPS-Denied Environments
Current localization systems, including GPS, are known to give unreliable readings where signal strength is low, for instance in dense urban areas ("urban canyons"), indoors, or underground. Although alternatives such as Wi-Fi or RF-based positioning exist, network availability is an issue and accuracy is insufficient. Moreover, older vision-based navigation approaches are computationally expensive and cannot run efficiently in real time on constrained platforms such as a self-driving car or a mobile robot.
This patent (US20200401617A1) presents a Visual Positioning System (VPS) that addresses the shortcomings of GPS through image-based self-positioning. It maintains a reference database of geotagged images linked to known positions. A feature vector computed from the query image captured by the device is compared against these reference images to determine the device's position. Challenges such as disambiguating directional features amid the repetitive patterns common in urban environments, and continual corrections in response to ambient conditions, can raise computational load but improve precision. The system delivers dependable localization in GPS-denied environments while remaining frugal with computational and communication resources.
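The core retrieval step is a nearest-neighbor search over feature vectors. The sketch below is a minimal illustration of that idea only, not the patented method: the toy three-dimensional vectors, the cosine-similarity metric, and the dictionary layout are all assumptions (real systems use learned descriptors with thousands of dimensions and approximate search):

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def localize(query_vec, reference_db):
    """Return the geotag of the reference image whose feature vector
    best matches the query image's vector."""
    best = max(reference_db, key=lambda e: cosine(query_vec, e["features"]))
    return best["geotag"]

reference_db = [
    {"features": (1.0, 0.0, 0.0), "geotag": (48.85, 2.35)},   # landmark A
    {"features": (0.0, 1.0, 0.0), "geotag": (40.71, -74.00)}, # landmark B
]
```

A query vector of `(0.9, 0.1, 0.0)` would be matched to landmark A, and the device would adopt its geotag as the position estimate; the patent's refinements deal with making that match robust to repetitive facades and changing lighting.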
Google LLC – Position-Based Location Indication and Device Control Using Visual Positioning Systems (VPS)
In smart homes and offices, devices are frequently managed and controlled remotely, which can be complex and error-prone. Previous approaches rely on basic two-dimensional floor plans, which are imprecise for larger three-dimensional spaces and confusing when devices share a locale or sit in the same area but on different storeys. These inaccuracies cause operational inefficiencies and mistakes when locating or commanding devices within a given region.
This patent, 'System and method for 3D localization and control of devices with a visual positioning system' (US11054918B2), proposes a system that employs a VPS to accurately locate and manage devices within a 3D framework. The user aims a computing device, such as a mobile phone, at a target location from several perspectives. From each aim, the system casts a ray, and the intersection of those rays pins down the precise location of the target device. Once the device is identified, the user can control it through a GUI appropriate to its function. The method provides an accurate way to manage devices, reduces setup complexity, and extends usability to large, multi-storey environments.
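The geometric heart of this scheme, intersecting aim rays from two viewpoints, is classic triangulation. A minimal sketch (this is the standard closest-point-between-two-lines construction, offered as an illustration of the idea rather than the patent's actual algorithm; directions are assumed to be unit vectors):

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def add_scaled(p, d, t):
    return tuple(x + t * y for x, y in zip(p, d))

def triangulate(p1, d1, p2, d2):
    """Midpoint of the closest points between two aim rays (unit
    directions). With perfect aim the rays intersect and the midpoint
    is the target; with noisy aim it is the least-squares compromise."""
    w = sub(p2, p1)
    c = dot(d1, d2)
    denom = 1.0 - c * c  # rays must not be parallel
    t1 = (dot(w, d1) - c * dot(w, d2)) / denom
    t2 = (c * dot(w, d1) - dot(w, d2)) / denom
    q1 = add_scaled(p1, d1, t1)
    q2 = add_scaled(p2, d2, t2)
    return tuple((a + b) / 2 for a, b in zip(q1, q2))
```

Two phone poses a couple of meters apart, each aiming at the same lamp, give two rays whose triangulated midpoint is the lamp's 3D position; that position is then bound to the device's control endpoint.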
Apple Inc – Method and System for Providing Visual Feedback of Map View Changes
A common pain point for users of navigation and mapping apps is switching between map view types, such as 2D and 3D. Without clear visual feedback, the map's changing perspective becomes confusing. Current systems lack a smooth means of conveying state changes, which feels unnatural and makes it harder for users to perform operations or move across the map.
This patent offers an approach for providing visual feedback during a switch between map views. It employs a virtual camera to animate smooth transitions, for instance between the 2D and 3D views. The virtual camera follows a fixed operating path while adapting its direction so that a proper and effective field of view is maintained throughout. The resulting visual transition gives users a sense of orientation as the view changes, improving usability.
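One common way to realize such a transition is to interpolate the virtual camera's parameters along a fixed path with an easing curve. The sketch below is a generic illustration of that technique, not the patented animation: the pitch/altitude parameterization, the endpoint values, and the smoothstep easing are all assumptions:

```python
def ease_in_out(t):
    """Smoothstep easing: slow start and finish, fast middle."""
    return t * t * (3.0 - 2.0 * t)

def camera_at(progress, top_down=(90.0, 1000.0), perspective=(45.0, 300.0)):
    """Interpolate (pitch_degrees, altitude_m) along a fixed path from
    the top-down 2D view to the tilted 3D view; progress runs 0..1."""
    t = ease_in_out(min(max(progress, 0.0), 1.0))
    pitch = top_down[0] + (perspective[0] - top_down[0]) * t
    alt = top_down[1] + (perspective[1] - top_down[1]) * t
    return pitch, alt
```

Sampling `camera_at` once per frame as `progress` advances tilts the camera from straight-down to oblique while it descends, so the user continuously sees how the 2D view relates to the 3D one instead of experiencing an abrupt cut.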
Labrador Systems Inc – VPS for Navigation in Varied Lighting Conditions
Robots today depend heavily on technologies such as GPS and Visual SLAM that are only effective when lighting is stable; they struggle in environments with constantly changing light, such as the transition between daylight and shadowed areas, or indoors where lighting is poor. Most existing Visual SLAM systems require lighting constancy, resulting in unreliable localization and obstacle identification. They are therefore suited only to rigid, controlled environments such as hospital corridors or school rooms, and not to ordinary homes and offices.
The patent (US11898848B2) introduces a VPS built on infrared cameras and illumination sources for effective navigation across lighting conditions. A flood infrared illumination source guarantees a constant light environment, letting the device determine its location regardless of ambient light. At the same time, a structured (patterned) infrared lighting system tracks stationary objects and motion. Controlled by a processor, these components allow a single, reconfigurable Visual SLAM map to operate under varying lighting conditions. The system improves navigation reliability for mobile robots, self-driving cars, and wearable technologies in realistic settings.
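One simple way a processor can serve both purposes with a single camera is to time-multiplex the two illuminators. The sketch below is a hypothetical scheduling illustration only (the alternating-frame policy and the function names are assumptions, not the patent's control logic):

```python
def illumination_for(frame_index):
    """Alternate illuminators each frame: flood IR gives a constant-light
    view for localization against the SLAM map; structured (patterned) IR
    reveals depth for obstacle tracking."""
    return "flood" if frame_index % 2 == 0 else "structured"

def process_frame(frame_index, localization_log, obstacle_log):
    """Route each frame to the pipeline matching its illumination mode."""
    mode = illumination_for(frame_index)
    if mode == "flood":
        localization_log.append(frame_index)
    else:
        obstacle_log.append(frame_index)
    return mode
```

Because the flood-lit frames always see the same infrared-illuminated scene, the localization pipeline is insulated from ambient lighting changes, which is what lets one SLAM map serve daylight, shadow, and darkness alike.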