Cortica’s revolutionary automotive visual intelligence platform is built on the foundation of a mature, patented, self-learning technology. Its robust signature-based representation and bottom-up, fine-grained, unsupervised learning enable a more detailed, comprehensive, and precise interpretation of the car’s surroundings. The lightweight, efficient computational framework fortifies autonomous vehicles with the power of Autonomous AI. Cortica's AI operates at the core of four product lines addressing the intricacies and complexities of fully autonomous driving.
The Cortica platform gains a deep understanding of the car’s environment by immediately recognizing both generic and granular classes of objects, up to the level of full scene reconstruction and prediction. The technology recognizes more than 10,000 fine-grained concepts.
With fine-grained recognition, the system identifies everything from pedestrians with baby strollers, to hoverboards, to individuals walking while looking at their smartphones. These robust capabilities extend to all concepts and tangible objects.
Beyond sensory perception, Cortica's Autonomous AI interprets complex contextual states with an added layer of predictive AI. This allows the system to assign probabilities to an object’s next possible courses of action while simultaneously predicting additional objects likely to enter the frame. This deep understanding is key for both policy and planning.
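The idea of placing probabilities on an object's next course of action can be illustrated with a toy sketch. Everything here is invented for illustration: the object classes, the action labels, and the probabilities stand in for what the learned predictive layer would produce; this is not Cortica's actual model.

```python
# Toy stand-in for the learned predictive layer. Classes, actions and
# probabilities are all hypothetical, chosen only to show the interface.
NEXT_ACTION = {
    "pedestrian_with_stroller": {"continue": 0.70, "stop": 0.25, "enter_road": 0.05},
    "hoverboard_rider":         {"continue": 0.50, "swerve": 0.30, "enter_road": 0.20},
}

def predict(obj_class: str) -> dict:
    """Return a probability distribution over the object's next actions."""
    return NEXT_ACTION[obj_class]

probs = predict("hoverboard_rider")
most_likely = max(probs, key=probs.get)  # the action the planner should expect
```

A planner consuming this output could weight each candidate trajectory by the probability of the actions that would make it unsafe.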
For a vehicle to position itself accurately it requires a comprehensive and up-to-date visual map of its surroundings. Cortica’s Autonomous AI generates and continually updates a highly detailed map at scale, enabling the car to position itself in space with absolute precision in all driving conditions and scenarios.
Cortica’s Autonomous AI maps visual features to high-dimensional, linear signatures that are a portable and lightweight representational format.
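One common way to turn visual features into a high-dimensional, lightweight representation is random projection followed by binarization, so that similar inputs yield signatures differing in few bits. The sketch below assumes this technique purely for illustration; Cortica's actual signature scheme is proprietary and the dimensions and function names here are invented.

```python
import numpy as np

def make_signature(features: np.ndarray, projection: np.ndarray) -> np.ndarray:
    """Project a dense feature vector into a high-dimensional space and
    binarize it, yielding a compact, portable bit signature."""
    return (projection @ features > 0).astype(np.uint8)

rng = np.random.default_rng(0)
projection = rng.standard_normal((1024, 128))  # 128-d features -> 1024-bit signature

feat_a = rng.standard_normal(128)
feat_b = feat_a + 0.05 * rng.standard_normal(128)  # slightly perturbed second view

sig_a = make_signature(feat_a, projection)
sig_b = make_signature(feat_b, projection)

# Similar inputs produce signatures with a small Hamming distance.
hamming = int(np.sum(sig_a != sig_b))
```

A 1024-bit signature occupies 128 bytes, which is what makes such a format cheap to cache, compare, and transmit between vehicles.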
Cortica’s technology can use any visual cue as a landmark, beyond a closed set of objects. This creates a true 'use-anywhere' solution that reinforces localization precision.
Mapping information can be collected from any camera-equipped vehicle.
The platform constantly identifies changes in the environment by comparing existing and new signatures, keeping the map accurate up to the minute.
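Under the bit-signature assumption, detecting an environmental change reduces to comparing the stored signature for a location against a freshly computed one. The threshold and helper below are illustrative, not part of any documented Cortica API.

```python
import numpy as np

def detect_change(stored_sig: np.ndarray, new_sig: np.ndarray,
                  threshold: float = 0.1) -> bool:
    """Flag a map change when the new signature for a location diverges
    from the stored one by more than `threshold` of its bits."""
    distance = float(np.mean(stored_sig != new_sig))
    return distance > threshold

rng = np.random.default_rng(1)
stored = rng.integers(0, 2, 1024, dtype=np.uint8)

unchanged = stored.copy()
unchanged[:20] ^= 1   # small sensor noise: ~2% of bits flipped
changed = stored.copy()
changed[:400] ^= 1    # e.g. new construction altered the scene: ~39% flipped

print(detect_change(stored, unchanged))  # False
print(detect_change(stored, changed))    # True
```

Because the comparison is a cheap bitwise operation, every passing vehicle can run it continuously and upload only the signatures that actually changed.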
Cortica's light and universal signature files update the database and can be instantly shared among vehicles to ensure delivery of up-to-date driving information.
This solves, in one sweep, the scalability, robustness, and update limitations of existing solutions.
Localization is achieved by identifying commonalities between image signatures. Cortica can use any reference point as a landmark to position the vehicle.
Signatures already generated in the car for sensing purposes are reused and matched against the locally cached mapping signatures. Based on the match result, the car's exact location is determined.
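The matching step above can be sketched as a nearest-neighbor lookup: compare the live sensing signature against every cached mapping signature and take the position of the closest one. The cache layout, Hamming-distance matching, and function names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical local map cache: one signature per landmark, each with a
# known (x, y) position in meters.
map_sigs = rng.integers(0, 2, (50, 1024), dtype=np.uint8)
map_positions = rng.uniform(0, 1000, (50, 2))

def localize(live_sig: np.ndarray, map_sigs: np.ndarray,
             map_positions: np.ndarray):
    """Reuse the sensing signature: find the cached mapping signature with
    the smallest Hamming distance and return its stored position."""
    distances = np.sum(map_sigs != live_sig, axis=1)
    best = int(np.argmin(distances))
    return map_positions[best], best

# Live signature: landmark 7 observed with a little noise (30 bits flipped).
live = map_sigs[7].copy()
live[rng.choice(1024, size=30, replace=False)] ^= 1

position, idx = localize(live, map_sigs, map_positions)
```

Since unrelated 1024-bit signatures differ in roughly half their bits, a noisy re-observation (tens of flipped bits) still matches its landmark unambiguously.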
Cortica's signature is able to fuse multiple sensor inputs into a single representation space. The fused space leverages the expressive benefits of any added sensor without the limitations of a secondary, rule-based fusion layer.
Fusing multiple data sources into a single representation space provides a more robust and complete understanding. Utilizing multiple sensors allows the car to handle situations where even a human would have tremendous difficulty, such as a torrential downpour or extremely heavy fog. In these extreme circumstances, radar, lidar, and audio can provide an added layer of safety.
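One way to realize a single representation space is to give each sensor its own projection into a shared signature space and sum the contributions before binarizing, so a blinded sensor simply drops out rather than requiring rule-based arbitration. This is a minimal sketch of that idea; the sensor set, dimensions, and fusion rule are all assumptions, not Cortica's published design.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-sensor feature dimensions and a shared signature width.
DIMS = {"camera": 128, "radar": 32, "lidar": 64}
SIG_BITS = 1024

# One projection per sensor, all targeting the same signature space.
projections = {name: rng.standard_normal((SIG_BITS, dim))
               for name, dim in DIMS.items()}

def fuse(readings: dict) -> np.ndarray:
    """Sum each available sensor's projection into the shared space, then
    binarize. Missing sensors (e.g. a fog-blinded camera) simply drop out."""
    acc = np.zeros(SIG_BITS)
    for name, feats in readings.items():
        acc += projections[name] @ feats
    return (acc > 0).astype(np.uint8)

clear_day = {name: rng.standard_normal(dim) for name, dim in DIMS.items()}
fused = fuse(clear_day)

# Heavy fog: the camera contributes nothing, yet radar and lidar alone
# still yield a well-formed signature in the same space.
foggy = {k: v for k, v in clear_day.items() if k != "camera"}
degraded = fuse(foggy)
```

The design point the sketch makes is that downstream components never see per-sensor formats: every condition, degraded or not, produces the same 1024-bit signature type.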
Barclays estimates that a single autonomous car can generate as much as 100 GB of data every second. Applied to the entire US fleet, that equates to 5.8 billion terabytes of raw data per hour.
Gaining visibility into this massive volume of data to discover true insights is the only way to teach an AI to drive. Defining and validating policy requires big data, most notably for long-tail, blind-spot behaviors.
The unsupervised AI is able to comb through the tremendous amounts of existing automotive data to detect patterns and cluster it, enabling search functionality and insight analysis. Operators can search by text, image, video, or signature.
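Search-by-signature can again be sketched as nearest-neighbor retrieval over the archive of stored clip signatures. The archive size, query construction, and `search` helper below are invented for illustration; a production system would use an approximate-nearest-neighbor index rather than a linear scan.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical archive: one 1024-bit signature per stored video clip.
archive = rng.integers(0, 2, (10_000, 1024), dtype=np.uint8)

def search(query_sig: np.ndarray, archive: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k archived clips whose signatures are closest
    (by Hamming distance) to the query signature."""
    distances = np.sum(archive != query_sig, axis=1)
    return np.argsort(distances)[:k]

# Query: clip 42's scene re-observed with noise (40 bits flipped), e.g. a
# second recording of the same rare event an operator wants more examples of.
query = archive[42].copy()
query[rng.choice(1024, size=40, replace=False)] ^= 1

hits = search(query, archive, k=5)
```

This is how an operator chasing a long-tail behavior (one hoverboard clip in hand) could pull every similar clip out of petabytes of footage without any labels.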
The Cortica Big Data platform is engineered from the ground up, with proprietary signature and Cortex™ technology at its base, to provide ultra-scale visual data storage and retrieval.
Big data and machine learning functionality allows for:
Signatures stored are:
Cortica’s technology recognizes concepts in all conditions, regardless of lighting, weather, obstructions, lack of lane markings or any other situation that could arise.
The generic technology applies simpler and lighter common computational resources to the autonomous driving components, and the system operates with extremely low power consumption.
The platform is compatible with existing hardware and does not require additional retrofitted components.
The lightweight signature files preserve raw scene information for constant updates and learning. These signatures are shared among vehicles and with the concept database for continuous updates.
A manual image annotation process, as employed by other solutions, is not scalable for the big data generated by self-driving vehicles. Cortica’s lightweight, unsupervised approach of bottom-up learning from large scale databases is not limited by the increasing long-tail of edge cases.
Cortica’s architecture utilizes an unannotated array of images and video to learn key concepts from the data and develop contextual, situational understanding. This generic background process allows for all concept types to be covered with no manual training.