Introducing the world’s smallest and most power-efficient Event-based Vision sensor ever released

Build the next generation of smart consumer devices with Prophesee’s GenX320 Metavision® sensor, unlocking new levels of intelligence, autonomy and safety, down to microwatt power levels in an ultra-compact 3x4mm format.

GENX320

KEY FEATURES

Resolution (px): 320×320
Ultra-low power mode: down to 36μW
Optical format: 1/5”
Pixel latency @1k lux (μs): <150
Dynamic Range (dB): >120
Nominal contrast threshold (%): 25
Pixel size (μm): 6.3 x 6.3
Embedded features: Anti-flicker filtering (AFK) + Event-Rate Controller (ERC) + Spatio-temporal Contrast filter (STC)

TYPICAL APPLICATIONS

AR/VR/XR: Eye tracking • Gesture recognition
IoT: AI and Machine Learning on the edge • Always-on cameras
Healthcare (privacy) cameras
Wearables
Smart Home 

PIXEL INTELLIGENCE

Bringing intelligence to the very edge

Inspired by the human retina, each pixel at the heart of Prophesee’s patented Event-Based Metavision sensors embeds its own intelligence processing, enabling it to activate independently and trigger events.

ULTRA-LOW POWER

Down to 36μW at sensor level

The Metavision sensor’s pixel independence and intelligent power-mode architecture enable new levels of power efficiency, starting at just 36μW in ultra-low power mode and 3mW in typical operation.

PRIVACY BY DESIGN

Events, not images

The Prophesee Event-based Metavision sensor does not capture images but events: sparse, asynchronous data driven by individual pixels. And because it only acquires motion, the static scene background is ignored at sensor level.

SPEED

>10k fps Time-Resolution Equivalent

There is no framerate tradeoff anymore. Take full advantage of events over frames and reveal what stays hidden in hyper-fast, fleeting scene dynamics.

 

ULTRA COMPACT

3x4mm

With ultra-compact dimensions of only 3x4mm, the GenX320 is designed to fit your most space-constrained system designs.

 

DYNAMIC RANGE

>120dB Dynamic Range

Achieve high robustness even in extreme lighting conditions. With Metavision sensors you can now see details from pitch dark to blinding brightness within the same scene, at any speed.

AI-FRIENDLY

On-chip AI features

Advanced on-chip features enable native data compatibility with AI accelerators through histogram output straight from the sensor.
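For illustration, here is a minimal Python sketch (not the sensor’s on-chip implementation or output format) of how such an event histogram can be built: positive and negative events are counted per pixel over a time window, yielding a dense 2×320×320 tensor that an AI accelerator or CNN can consume directly. The field names and window handling are assumptions for the example.

    # Illustrative only: accumulate events into a 2-channel per-pixel histogram.
    import numpy as np

    def event_histogram(events, width=320, height=320):
        # events: structured array with fields x, y, p (polarity 0/1), t (µs)
        hist = np.zeros((2, height, width), dtype=np.uint16)
        # channel 0 counts negative events, channel 1 counts positive events
        np.add.at(hist, (events["p"], events["y"], events["x"]), 1)
        return hist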

LOW LIGHT

0.05 lx Low-Light Cutoff

Sometimes the darkest areas hold the clearest insights. Metavision enables you to capture events where there is almost no light, down to 0.05 lx.

ULTRA LOW DATA

 10 to 1000x less data

With each pixel only reporting when it senses movement, Metavision sensors generate on average 10 to 1000x less data than traditional image-based ones.

A COMPREHENSIVE PRODUCT RANGE TO FIT YOUR DESIGN NEEDS

BUILT FOR EDGE APPLICATIONS

EYE TRACKING

Typical use cases: Foveated rendering, user interaction

Unlock next-generation eye-tracking capabilities with the ultra-low power and high refresh rate of Metavision® sensors. Reach 1ms sampling times for ultra-smooth eye position tracking while optimizing system autonomy and thermal performance.
Video courtesy of ZinnLabs

20mW entire gaze-tracking system

1kHz or more eye position tracking rate

GESTURE RECOGNITION

Typical use cases: Touchless interaction

Achieve highly robust and smooth gesture recognition and tracking thanks to Metavision® sensors’ high dynamic range (>120dB), low-light cutoff (0.05 lux), high power efficiency (down to the μW range) and low latency.
Video courtesy of Ultraleap

>120dB dynamic range 

Down to 36 μW power efficiency at sensor level

OBJECT DETECTION & TRACKING

Typical use cases: Always-on cameras

Track moving objects in the field of view. Leverage the low data-rate and sparse information provided by event-based sensors to track objects with low compute power.
Video courtesy of Restar

Continuous tracking in time: no more “blind spots” between frame acquisitions

Native foreground segmentation: analyze only motion, ignore the static background

FALL DETECTION

Typical use cases: AI-enabled monitoring

Detect and classify activities in real time while respecting the subject’s privacy at the sensor level. Bring more intelligence to the edge and trigger alerts only on key events, such as a person falling in a hospital room, while generating 10-1000x less data and benefiting from high robustness to lighting conditions (>120dB dynamic range, 0.05 lux low-light cutoff).
Video courtesy of YunX

Privacy by design: Metavision sensors do not capture images

AI-enabled: Train your models on lighter datasets thanks to the background- and color-invariant properties of events

ACTIVE MARKERS

Typical use cases: Constellation tracking

Achieve high-speed LED frequency detection in the 10s of kHz with high tracking precision. Thanks to live frequency analysis, natively filter out parasitic flickering light for optimal tracking robustness.

>10kHz High-speed LED frequency detection

Native parasitic-frequency filtering for optimal tracking robustness
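As a rough illustration of the principle (not the sensor’s built-in frequency filtering), an LED marker’s blink frequency can be estimated from the spacing of positive events at a pixel. The snippet below is a hypothetical Python sketch with illustrative field names.

    # Illustrative sketch: estimate an LED's blink frequency from one pixel's events.
    import numpy as np

    def estimate_led_frequency(timestamps_us, polarities):
        on_times = timestamps_us[polarities == 1]   # positive (OFF -> ON) events
        if len(on_times) < 2:
            return 0.0
        periods_us = np.diff(on_times)              # one period per blink cycle
        return 1e6 / np.median(periods_us)          # median is robust to jitter

    # Example: a 10 kHz LED yields positive events roughly every 100 µs.
    t = np.arange(0, 10_000, 100, dtype=np.int64)
    print(estimate_led_frequency(t, np.ones_like(t)))   # ~10000.0 Hz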

INSIDE-OUT TRACKING

Typical use cases: AR/VR/XR

Unlock ultra-fast and smooth inside-out tracking running at >10kHz and benefit from high robustness to lighting conditions (>120dB dynamic range, 0.05 lux low-light cutoff).

>10kHz high-speed pose estimation

>120dB dynamic range 

Don’t see a use case that fits?
Our team of experts can provide access to additional libraries of privileged content.
Contact us >

EARLY ADOPTERS

“Zinn Labs is developing the next generation of gaze tracking systems built on the unique capabilities of Prophesee’s Metavision event sensors.

The new GenX320 sensor meets the demands of eye and gaze movements that change on millisecond timescales. Unlike traditional video-based gaze tracking pipelines, Zinn Labs is able to leverage the GenX320 sensor to track features of the eye with a fraction of the power and compute required for full-blown computer vision algorithms, bringing the footprint of the gaze tracking system below 20 mW.

The small package size of the new sensor makes this the first time an event-based vision sensor can be applied to space-constrained head-mounted applications in AR/VR products.

Zinn Labs is happy to be working with Prophesee and the GenX320 sensor as we move towards integrating this new sensor into upcoming customer projects.”

 

Kevin Boyle
CEO & Founder

“Privacy continues to be one of the biggest consumer concerns when vision-based technology is used in our products such as DMS and TV services. Prophesee’s event-based Metavision technology enables us to take our ‘privacy by design’ principle to an even more secure level by allowing scene understanding without the need to have explicit visual representation of the scene.

By capturing only changes in every pixel, rather than the entire scene as with traditional frame-based imaging sensors, our algorithms can derive knowledge to sense what is in the scene, without a detailed representation of it. We have developed a proof-of-concept demo showing that DMS is fully possible using neuromorphic sensors. Using a 1MP neuromorphic sensor we can achieve similar performance to an active NIR illumination 2MP vision-sensor-based solution.

Going forward, we are focusing on the GenX320 neuromorphic sensor, which can be used in privacy-sensitive smart devices to improve user experience.”

 

Petronel Bigioi
Chief Technology Officer

“YunX is a company dedicated to IoT technology and medical applications. With the increasingly aging global population and rapid development of medical technology, the caregiving market is facing unprecedented opportunities. The main challenge lies in how to balance privacy protection and real-time delivery of more accurate and reliable AI monitoring and analysis of the human body. Based on Prophesee’s EVS imaging technology, YunX provides the best solution for this market, which has been functionally validated in its advanced products. With the advent of the fifth-generation GenX320 sensor, the product has achieved a significant new step in terms of compactness, low power consumption and AI capabilities at the edge, while maintaining a significant performance advantage through algorithm innovation.”

 

M K Bao
YunX CEO

“We have seen the benefits of Prophesee’s event-based sensors in enabling hands-free interaction via highly accurate gesture recognition and hand tracking capabilities in UltraLeap’s TouchFree application. Their ability to operate in challenging environmental conditions, at very efficient power levels, and with low system latency enhances the overall user experience and intuitiveness of our touch free UIs.

With the new GenX320 sensor, these benefits of robustness, low power consumption, low latency and high dynamic range can be extended to more types of applications and devices, including battery-operated and small form-factor systems, proliferating hands-free use cases for increased convenience and ease of use in interacting with all sorts of digital content.”

Tom Carter
CEO & Co-founder

 

CUT TIME TO SOLUTION

EXTENSIVE DOCUMENTATION & SUPPORT

With EVK purchase, get 2 hours of premium support as well as privileged access to our Knowledge Center, including over 110 articles, application notes, in-depth technology discovery material and step-by-step guides.

5X AWARD-WINNING EVENT-BASED VISION SOFTWARE SUITE

Choose from an easy-to-use data visualizer or an advanced API with 95 algorithms, 79 code samples and 24 tutorials.  

Get started today with the most comprehensive Event-based Vision software toolkit to date, for free.

OPEN SOURCE ARCHITECTURE

Metavision SDK is based on an open source architecture, unlocking full interoperability between our software and hardware devices and enabling a fast-growing Event-based community.  

 

ADVANCED TOOLKIT

With a Metavision sensor purchase comes complimentary access to an advanced toolkit composed of an online portal, drivers, a data player and the SDK.

We are sharing an advanced toolkit so you can start building your own vision.

 

METAVISION INTELLIGENCE SUITE

Experience first hand the new performance standards set by Event-Based Vision by interacting with more than 95 algorithms, 79 code samples and 24 tutorials.

Get started today with the most comprehensive Event-Based Vision software toolkit to date, for free.

Q&A

Do I need to buy an EVK to start?

You don’t necessarily need an Evaluation Kit or Event-based Vision equipment to start your discovery. You can start with Metavision Studio and interact with provided recordings first.

What data do I get from the sensor exactly?

The sensor will output a continuous stream of data consisting of:

  • X and Y coordinates, indicating the location of the activated pixel in the sensor array
  • The polarity, indicating whether the event corresponds to a positive (dark to light) or negative (light to dark) contrast change
  • A timestamp “t”, precisely encoding when the event was generated, with microsecond resolution
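As a minimal illustration (the field names are ours, and in practice the Metavision SDK handles raw-stream decoding for you), a decoded event stream can be represented in Python like this:

    # Illustrative representation of the decoded event stream described above.
    import numpy as np

    event_dtype = np.dtype([
        ("x", np.uint16),   # column of the activated pixel (0..319 on GenX320)
        ("y", np.uint16),   # row of the activated pixel (0..319 on GenX320)
        ("p", np.uint8),    # polarity: 1 = positive (dark to light), 0 = negative
        ("t", np.uint64),   # timestamp in microseconds
    ])

    events = np.array([(12, 45, 1, 1_000_001),
                       (13, 45, 0, 1_000_027)], dtype=event_dtype)
    for ev in events:
        print(f"x={ev['x']} y={ev['y']} p={ev['p']} t={ev['t']} µs")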

For more information, check our pages on Event-based concepts and events streaming and decoding.

Learn more

How can you be “blur-free”?

Image blur is mostly caused by movement of the camera or the subject during exposure. This can happen when the shutter speed is too slow or if movements are too fast.

With an event-based sensor, there is no exposure but rather a continuous flow of “events” triggered by each pixel independently whenever an illumination change is detected. Hence there is no blur.

Can I also get images in addition to events?

You cannot get images directly from our event-based sensor, but for visualization purposes, you can generate frames from the events.

To do so, events are accumulated over a period of time (usually the frame period, for example 20ms for 50 FPS), because the number of events occurring at any precise time T (with microsecond precision) could be very small.

Then a frame is initialized with a background color (e.g. white), and for each event occurring during the frame period, the corresponding pixel is set in the frame.
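A minimal Python sketch of this accumulation (illustrative only; the neutral background and polarity colors are a common visualization choice, not a fixed rule):

    # Illustrative only: paint one frame period's events onto a blank frame.
    # events: structured array with fields x, y, p (0/1 polarity), t (µs).
    import numpy as np

    def events_to_frame(events, t_start_us, period_us=20_000, width=320, height=320):
        frame = np.full((height, width), 128, dtype=np.uint8)   # neutral background
        in_window = (events["t"] >= t_start_us) & (events["t"] < t_start_us + period_us)
        sel = events[in_window]
        # positive events drawn white, negative events drawn black
        frame[sel["y"], sel["x"]] = np.where(sel["p"] == 1, 255, 0).astype(np.uint8)
        return frame

Calling such a function every 20ms of event time yields a 50 FPS visualization.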

What can I do with the provided software license?

The suite is provided under commercial license, enabling you to use, build and even sell your own commercial application at no cost. Read the license agreement here.

What is the frame rate?

There is no frame rate: our Metavision sensor uses neither a global shutter nor a rolling shutter; it is actually shutter-free.

This represents a new machine vision category enabled by a patented sensor design that embeds each pixel with its own intelligence processing, enabling pixels to activate independently when a change is detected.

As soon as an event is generated, it is sent to the system continuously, pixel by pixel, no longer at a fixed pace.

How can the dynamic range be so high ?

The pixels of our event-based sensor contain photoreceptors that detect changes of illumination on a logarithmic scale. Hence each pixel automatically adapts to low and high light intensity and does not saturate as a classical frame-based sensor would.
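As a worked illustration (using the 20·log10 dynamic-range convention common for image sensors and the 25% nominal contrast threshold from the key features above), >120dB corresponds to a greater than 10^6 ratio between the brightest and darkest usable illumination, and a log-domain contrast threshold reacts to the same relative change regardless of the absolute light level:

    # Worked illustration: 120 dB as an illumination ratio, and why a log-domain
    # contrast threshold behaves the same at very low and very high light levels.
    import math

    print(10 ** (120 / 20))                 # 1e6: brightest vs darkest usable level

    for lux in (0.05, 100_000):
        step = math.log(1.25 * lux) - math.log(lux)   # a 25% relative change
        print(round(step, 4))               # ~0.2231 at both light levels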

I have existing image-based datasets, can I use them to train Event-based models ?

Yes, you can leverage our “Video to Event Simulator”. This is a Python script that allows you to transform frame-based images or videos into their event-based counterparts. Those event-based files can then be used to train event-based models.
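The snippet below is only a toy Python sketch of the underlying principle (emit an event whenever a pixel’s log intensity has drifted by more than the contrast threshold since its last event); it is not the simulator’s actual API, and a real simulator also interpolates between frames for finer timestamps.

    # Toy sketch of the video-to-events principle (not the SDK simulator's API).
    import numpy as np

    def frames_to_events(frames, timestamps_us, threshold=0.25):
        # frames: list of float grayscale images in [0, 1]; returns (x, y, p, t) tuples
        eps = 1e-3
        ref = np.log(frames[0] + eps)                 # last log level seen per pixel
        events = []
        for frame, t in zip(frames[1:], timestamps_us[1:]):
            log_i = np.log(frame + eps)
            diff = log_i - ref
            fired = np.abs(diff) >= threshold
            ys, xs = np.nonzero(fired)
            for x, y in zip(xs, ys):
                events.append((int(x), int(y), int(diff[y, x] > 0), t))
            ref[fired] = log_i[fired]                 # reset reference where events fired
        return events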

Learn more