MATRIX 3.0 Phase 1 Stage 1 Deliverables — 1

July 30, 2023

What is this?

This demonstration shows how we collect brainwave data from the user with a non-invasive brainwave sensing chip and then, using our artificial intelligence algorithm, separate out the portion of the data related to attention. This attention signal is used to control a programmable toy car: when the user's attention becomes more focused, the energy value of the attention-related brainwave rises and the car moves faster; when attention weakens, the energy value falls and the car slows down.
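
To make the mapping concrete, here is a minimal sketch of how a quantified attention value could drive the car's speed. It assumes the attention signal has already been reduced to a score on a 0–100 scale; that scale, the function name, and the speed range are illustrative choices of ours, not Matrix's actual algorithm.

    def attention_to_speed(attention: float,
                           min_speed: float = 0.0,
                           max_speed: float = 1.0) -> float:
        """Linearly map an attention score in [0, 100] to a car speed command."""
        attention = max(0.0, min(100.0, attention))  # clamp noisy readings
        return min_speed + (max_speed - min_speed) * attention / 100.0

    print(attention_to_speed(80.0))  # more focused attention -> faster car (0.8)
    print(attention_to_speed(20.0))  # weaker attention -> slower car (0.2)

Any monotonically increasing mapping would produce the same qualitative behaviour; a linear one is simply the easiest to demonstrate.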

What does it mean?

In our analysis of non-invasive data, attention is the first deterministic signal we have separated from the brainwave stream. We use a quantifiable model to express the strength of attention as an energy value, which allows the result to be demonstrated intuitively with a toy car.

Currently, our modeling plans for the first phase of Matrix 3.0 include brainwave data modeling and the training of large language models. Attention is an important data dimension in brainwave data modeling: it effectively expresses the user's level of concentration and reflects the overall energy level of their thinking. It is also the first deterministic numerical dimension introduced into brainwave data modeling. Once attention modeling is complete, we will gradually introduce other numerical dimensions.
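
As a rough illustration of attention serving as the first numerical dimension of a brainwave record, the sketch below shows one possible shape for such a record, with room for dimensions added later. The field names, types, and 0–100 scale are assumptions on our part; the post does not specify the data format.

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class BrainwaveRecord:
        timestamp: float   # seconds since the start of the recording session
        attention: float   # quantified attention score, e.g. on a 0-100 scale
        # placeholder for the numerical dimensions to be introduced later
        extra_dims: Dict[str, float] = field(default_factory=dict)

    record = BrainwaveRecord(timestamp=12.5, attention=73.0)
    record.extra_dims["emotion"] = 0.4  # hypothetical future dimension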

How does it work?

  1. Collect brainwave data using the non-invasive brainwave sensing chip.

  2. Apply the artificial intelligence algorithm to separate the brainwave data related to attention.

  3. Quantify the energy field of attention strength using a model.

  4. Use the quantified attention data to control a programmable toy car, where increased attention leads to faster car movement and decreased attention leads to slower car movement (a minimal end-to-end sketch follows this list).
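
The sketch below strings the four steps together in one control loop. Because the sensing-chip driver, Matrix's attention-separation algorithm, and the car interface are not public, the EEG reader, the beta-band power proxy for attention, and the ToyCar class are all hypothetical stand-ins, not the actual implementation.

    import numpy as np

    SAMPLE_RATE = 256  # Hz, assumed sampling rate of the sensing chip

    def read_eeg_window(seconds: float = 1.0) -> np.ndarray:
        """Step 1 stand-in: return one window of raw EEG samples (noise here)."""
        return np.random.randn(int(SAMPLE_RATE * seconds))

    def attention_energy(eeg: np.ndarray) -> float:
        """Steps 2-3 stand-in: beta-band (13-30 Hz) power as an attention proxy."""
        spectrum = np.abs(np.fft.rfft(eeg)) ** 2
        freqs = np.fft.rfftfreq(len(eeg), d=1.0 / SAMPLE_RATE)
        band = (freqs >= 13) & (freqs <= 30)
        return float(spectrum[band].mean())

    class ToyCar:
        """Step 4 stand-in: a programmable car that accepts a speed in [0, 1]."""
        def set_speed(self, speed: float) -> None:
            print(f"car speed -> {speed:.2f}")

    car = ToyCar()
    for _ in range(5):                    # one control step per EEG window
        energy = attention_energy(read_eeg_window())
        speed = min(1.0, energy / 100.0)  # normalisation constant is arbitrary
        car.set_speed(speed)              # higher attention energy -> faster car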

What are the next plans?

  1. Further explore additional dimensions of attention data and information.

  2. Begin developing applications related to attention, such as training tools to enhance attention.

  3. Incorporate attention as a dimension in the Avatar Intelligence model and conduct modeling and training related to it.

  4. Analyze and separate other data from brainwaves, such as emotional expressions.

The Matrix AI Network was founded in 2017. In 2023, we enter Matrix 3.0, blending neuroscience with our previous work to realize the vision of the Matrix films.

FOLLOW MATRIX:

Website | GitHub | Twitter | YouTube

Telegram (Official) | Telegram (Masternodes)

Owen Tao (CEO) | Steve Deng (Chief AI Scientist) | Eric Choy (CMTO)
