It’s not enough to recognize
a traffic light 99% of the time.

Cortica’s platform provides protection against the edge cases for intelligent, safe driving, leading the way toward full autonomy.


See it in action

Understanding the world
around the vehicle


Cortica’s revolutionary automotive visual intelligence platform is built on the foundation of a mature, patented, self-learning technology. Its robust signature-based representation and bottom-up, fine-grained, unsupervised learning capabilities enable a more detailed and precise interpretation of the car’s surroundings, while its lightweight, efficient computational framework fortifies autonomous vehicles with the power of AI. Cortica's AI is at the core of four product lines addressing the complexities of driving, propelling automotive capabilities toward full autonomy.

01
Perception

ENVIRONMENTAL MODEL

The Cortica platform garners a deep understanding of the car’s environment by immediately recognizing generic and granular classes of objects, up to the level of full scene reconstruction and prediction. The technology recognizes more than 10,000 fine-grained concepts.


The System Recognizes
Vehicles & Trucks | Bicycles & Motorcycles | Pedestrians | Complex Contextual States | Motion States | And thousands more

With fine-grained recognition, the system identifies everything from pedestrians with baby strollers, to hoverboards, to individuals walking while looking at their smartphones. These robust capabilities support all key concepts.

Beyond sensing, the Cortica platform interprets complex contextual states with an added layer of predictive AI. This allows the system to assign probabilities to an object’s next possible actions while simultaneously predicting additional objects likely to enter the frame. This deep understanding is key for both policy and planning.
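One way to picture this predictive layer is as a mapping from a recognized contextual state to a probability distribution over plausible next actions. The states, actions, and probabilities below are illustrative assumptions, not Cortica's actual model:

```python
# Illustrative next-action model: each contextual state maps to a
# probability distribution over plausible next actions.
NEXT_ACTION_MODEL = {
    "pedestrian_at_curb":          {"cross": 0.55, "wait": 0.40, "walk_along": 0.05},
    "pedestrian_looking_at_phone": {"cross": 0.30, "wait": 0.25, "walk_along": 0.45},
    "cyclist_signaling_left":      {"turn_left": 0.85, "continue": 0.15},
}

def predict_next_actions(state, model=NEXT_ACTION_MODEL):
    """Return (action, probability) pairs for a state, most likely first."""
    dist = model.get(state, {"unknown": 1.0})
    return sorted(dist.items(), key=lambda kv: kv[1], reverse=True)

print(predict_next_actions("pedestrian_at_curb"))
```

A planning layer would then weigh these probabilities when choosing the vehicle's own trajectory.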

02
Localization

Mapping and Localization


For a vehicle to position itself accurately it requires a comprehensive and up-to-date visual map of its surroundings. The challenge is to generate and constantly update a highly detailed map at scale, and enable the car to position itself in space with absolute precision in all driving conditions.

CORTEX® TECHNOLOGY

Cortica’s Cortex® technology maps visual features to high-dimensional, linear signatures that are a portable and lightweight representation format.
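The Cortex encoding itself is proprietary, but the general idea of compressing visual features into a compact, portable signature can be sketched with a standard random-hyperplane (LSH-style) hash; everything below, including the dimension, is an assumption for illustration only:

```python
import random

def make_signature(features, dim=256, seed=0):
    """Hash a dense feature vector into a compact binary signature by
    recording which side of `dim` random hyperplanes it falls on.
    Similar feature vectors yield signatures with small Hamming distance."""
    rng = random.Random(seed)  # fixed seed: every vehicle uses the same planes
    sig = 0
    for _ in range(dim):
        plane = [rng.uniform(-1.0, 1.0) for _ in features]
        dot = sum(f * p for f, p in zip(features, plane))
        sig = (sig << 1) | (1 if dot >= 0 else 0)
    return sig  # an int whose dim bits form the signature

def hamming(a, b):
    """Number of differing bits between two signatures."""
    return bin(a ^ b).count("1")
```

Because the signature is just a few hundred bits, it is cheap to store, transmit, and compare, which is what makes it a practical portable representation format.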

SIGNATURE-BASED TECHNOLOGY

All Locations

Cortica’s technology can use any visual cue as a landmark, not only a closed set of objects. This creates a true ‘use-anywhere’ solution that reinforces localization precision.

Crowdsource Mapping

Mapping information can be collected from any camera equipped vehicle.

Up-To-Date Mapping

The platform is constantly identifying changes in the environment by comparing existing and new signatures, keeping the map accurate up to the minute.

Cortica's light and universal signature files update the database and can be instantly shared among vehicles to ensure up-to-date driving.

This solves, in one sweep, the scalability, robustness, and update limitations of existing solutions.


Signature Matching & Pose Reconstruction


Localization is achieved by identifying commonalities between image signatures. Cortica can use any reference point as a landmark to position the vehicle.

The signatures already generated in the car for sensing purposes are reused and matched against the locally cached mapping signatures. Based on the match result, the exact location of the car is determined.
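In miniature, that matching step amounts to a nearest-neighbor lookup in signature space. The toy map below, with hand-picked 16-bit signatures and (x, y, heading) poses, is an illustrative stand-in, not Cortica's actual map format:

```python
def hamming(a, b):
    """Number of differing bits between two binary signatures."""
    return bin(a ^ b).count("1")

def localize(frame_signature, cached_map):
    """Match the current frame's signature (already computed for sensing)
    against locally cached mapping signatures; the closest match gives
    the pose. `cached_map` maps (x, y, heading) -> signature."""
    return min(cached_map,
               key=lambda pose: hamming(frame_signature, cached_map[pose]))

# Toy cached map: three poses with 16-bit signatures.
cached = {(0, 0, 90): 0b1010101010101010,
          (5, 0, 90): 0b1010101010101111,
          (5, 5, 0):  0b0101010101010101}

print(localize(0b1010101010101101, cached))  # (5, 0, 90): nearest signature wins
```

A real system would restrict the search to signatures near the last known pose and fuse the result with odometry, but the core operation is this signature comparison.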

03
Sensor Fusion

Cortica's signature is able to fuse multiple sensor inputs into a single representation space. The fused space leverages the expressive benefits of any added sensor without the limitations of a secondary rule-based fusion layer.

Fusing multiple data sources into a single representation space provides a more robust and complete understanding. Utilizing multiple sensors allows the car to handle situations where even a human would have tremendous difficulty, such as torrential downpour or extremely heavy fog. In these extreme circumstances, radar, lidar, and audio provide an added layer of safety.
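As a minimal sketch of a single fused representation space, each modality can contribute a normalized embedding, concatenated with per-sensor weights; the weights, dimensions, and down-weighting policy here are assumptions for illustration, not Cortica's fusion method:

```python
import math

def normalize(v):
    """Scale a vector to unit length (zero vectors pass through unchanged)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def fuse(camera, radar, lidar, weights=(1.0, 0.5, 0.5)):
    """Fuse per-sensor embeddings into one vector. A degraded modality
    (e.g. camera in heavy fog) can simply be down-weighted, rather than
    handled by a secondary rule-based fusion layer."""
    wc, wr, wl = weights
    return ([wc * x for x in normalize(camera)]
            + [wr * x for x in normalize(radar)]
            + [wl * x for x in normalize(lidar)])

fused = fuse([3.0, 4.0], [1.0, 0.0], [0.0, 2.0])
print(len(fused))  # one 6-dimensional fused vector
```

Downstream perception then operates on the single fused vector, so adding a sensor changes the inputs, not the architecture.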

04
Big Data

Barclays estimates that a single autonomous car can generate as much as 100 GB of data every second. Applied to the entire US fleet, this equates to 5.8 billion terabytes of raw data per hour.
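A quick back-of-the-envelope check of those figures (the inputs are the estimates above, not measurements; the implied fleet size is our own inference):

```python
# Per-car data rate from the Barclays estimate above.
GB_PER_SEC_PER_CAR = 100
SECONDS_PER_HOUR = 3600

per_car_tb_per_hour = GB_PER_SEC_PER_CAR * SECONDS_PER_HOUR / 1000  # GB -> TB
fleet_tb_per_hour = 5.8e9  # the fleet-wide figure quoted above

# How many cars streaming concurrently do these two figures imply?
implied_cars = fleet_tb_per_hour / per_car_tb_per_hour
print(per_car_tb_per_hour, implied_cars)  # 360.0 TB per car-hour, ~16.1M cars
```

So each car produces roughly 360 TB per hour, and the fleet-wide figure corresponds to about 16 million vehicles streaming at once.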

Gaining visibility into this massive data set to discover true insights is the only way to teach an AI to drive. Defining and validating policy requires big data, most notably for long-tail, blind-spot behaviors.


The unsupervised AI is able to comb through the tremendous amounts of existing automotive data to detect patterns and cluster data–allowing for searchable functionality and insight analysis. Operators are able to search by text, image, video, or signature.

The Cortica Big Data platform is engineered from the ground up using proprietary signature and Cortex™ technology at its base to provide ultra-scale visual data storage and retrieval.

Big data and machine learning functionality allows for:

  • Search by image/frame
  • Data clustering and organization
  • Text to image/video search
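Because text, image, and video queries all reduce to the same signature space, one index can serve every modality. The sketch below shows that search as a nearest-neighbor lookup over toy 4-bit signatures; the clip names and signatures are invented for illustration:

```python
def hamming(a, b):
    """Number of differing bits between two binary signatures."""
    return bin(a ^ b).count("1")

def search(query_sig, database, k=2):
    """Return the names of the k stored items whose signatures are
    closest to the query signature, nearest first."""
    ranked = sorted(database.items(), key=lambda kv: hamming(query_sig, kv[1]))
    return [name for name, _ in ranked[:k]]

# Toy database: clip name -> signature.
db = {"pedestrian_clip": 0b1100,
      "cyclist_clip":    0b1010,
      "truck_clip":      0b0011}

print(search(0b1101, db, k=1))  # ['pedestrian_clip']
```

The same ranking primitive also supports clustering: items whose signatures fall within a small Hamming radius of each other group naturally.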


Stored signatures offer:

  • High distributability
  • Cortex compression
  • Sublinear database growth
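Sublinear growth follows from deduplication in signature space: recurring scenes hash to signatures already in the store, so repeat drives add little new data. The dedup policy below is an assumption for illustration, not Cortica's storage design:

```python
# Toy illustration of sublinear database growth via signature dedup.
store = set()
drives = [
    [0b1010, 0b1100, 0b0011],          # first drive: all scenes are new
    [0b1010, 0b1100, 0b0011, 0b0111],  # repeat drive: only one new scene
]

sizes = []
for drive in drives:
    store.update(drive)  # duplicate signatures cost nothing
    sizes.append(len(store))

print(sizes)  # [3, 4]: the second drive adds 1 entry, not 4
```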
Features

All Conditions

Cortica’s technology recognizes concepts in all conditions, regardless of lighting, weather, obstructions, lack of lane markings and more.

Powerful And Lightweight

The generic technology runs autonomous driving components on simpler, lighter, commonly available computational resources, and the system operates with extremely low power consumption.

Hardware Agnostic

The platform is compatible with existing hardware and does not require additional retrofitted components.

Portable And Universal

The lightweight signature files preserve raw scene information for continuous learning. These signatures are shared among vehicles and with the concept database to keep both constantly updated.

Scalable

A manual image annotation process, as employed by other solutions, is not scalable for the big data generated by self-driving vehicles. Cortica’s lightweight, unsupervised approach of bottom-up learning from large scale databases is not limited by the increasing long-tail of edge cases.

Unsupervised Learning

Cortica’s architecture utilizes an unannotated array of images and video to learn key concepts from the data and develop contextual, situational understanding. This generic background process allows for all concept types to be covered with no manual training.