# MANTA Mainnet Miner Deployment Guide

### 1. About MANTA

MANTA is a distributed automated machine learning platform built on the Matrix Mainnet. It consists of an auto-machine-learning application (Auto-ML) and its deployment system. It utilises Auto-ML search algorithms to locate accurate, low-latency deep models, and accelerates the search process through distributed computing.

Currently, you may deploy MANTA either on a Matrix miner or independently. After completing the deployment, you may access the web service trained by the software through your public IP address on port 8502. When MANTA starts operating a new job, the MANTA service platform keeps a record of all users’ public IPs and wallet addresses. The platform assigns learning tasks to users and encourages them to complete the tasks with rewards sent to their wallet addresses.


### 2. Hardware Requirements

CPU: Intel Xeon series with a base frequency of no less than 2.0 GHz and at least 8 cores


GPU: NVIDIA Pascal or newer, with at least 12 GB of video memory


Network: Internal and external bandwidth over 10 Gbps


Hard drive: At least 700 GB of SSD storage (500 GB for storing Matrix Mainnet data and 200 GB for storing models and training logs)


Memory: At least 32 GB of RAM


Network Configuration: A server that can be accessed through the public network at an address of the form http\://{IP}.

(For instance, http\://{IP}:8502. The IP part must be accessible through the public network.)


### 3. Distributed Auto-ML Web Service Configuration


A. The project utilises two datasets: one for image categorisation and one for training parallax-estimation models.


Image categorisation: ImageNet

Link: <https://www.image-net.org/challenges/LSVRC/2012/>

For more details, visit <https://blog.csdn.net/Yuan_mingyu/article/details/123940228>.


a. Dataset Download

Download the datasets at <https://www.image-net.org/challenges/LSVRC/index.php>.

<figure><img src="https://2664391676-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FT5LtcFS1DoKk05KGaZdm%2Fuploads%2FEivhtJ5yTjZXWExqMus0%2Fimage.png?alt=media&#x26;token=24f5c65c-00e0-4796-a331-b5a7a2549edc" alt=""><figcaption></figcaption></figure>

For image recognition, download the two files marked in red brackets:

Training images (Task 1 & 2): <https://image-net.org/data/ILSVRC/2012/ILSVRC2012_img_train.tar>

Validation images (all tasks): <https://image-net.org/data/ILSVRC/2012/ILSVRC2012_img_val.tar>


b. Dataset Processing

Having downloaded the training and validation datasets, we now need to convert them into a format that models can load directly.

First, decompress ILSVRC2012\_img\_train.tar into a folder named train. The decompressed folder should contain 1,000 tar files, each representing one category of images. The files are named by category, so do not rename them; simply decompress these tar files in turn.

Decompress:

```shell
mkdir train
tar xvf ILSVRC2012_img_train.tar -C ./train
```

As there are too many tar files to decompress one by one, write a script unzip.sh as below:


<figure><img src="https://2664391676-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FT5LtcFS1DoKk05KGaZdm%2Fuploads%2FeRVwx1z2xBZ2ZqkqEigA%2F02.png?alt=media&#x26;token=491c078c-ac4d-4817-aa36-6e0f7a7e3fa8" alt=""><figcaption></figcaption></figure>
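The screenshot above shows the script itself. As a reference, a typical unzip.sh for this task follows the standard ImageNet per-class extraction loop; the sketch below is an assumption based on that convention, not a transcription of the screenshot, so compare it against the image before use.

```shell
# unzip.sh -- run inside the train folder.
# Extracts each per-class tar into a folder of the same name, then deletes
# the tar to free space. Sketch of the standard ImageNet extraction loop;
# the project's actual script (see screenshot) may differ.
for f in *.tar; do
  [ -e "$f" ] || continue   # nothing to do if no tar files match
  d="${f%.tar}"             # e.g. n01440764.tar -> n01440764
  mkdir -p "$d"
  tar xf "$f" -C "$d"
  rm "$f"                   # remove the per-class tar after extraction
done
```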

Give the script execution permissions.


```shell
chmod +x ./unzip.sh
./unzip.sh
```


As ILSVRC2012\_img\_train.tar is quite large, you may delete it afterwards; just make sure its contents have been fully extracted into train before doing so.


The final training dataset should be something like the following.

<figure><img src="https://2664391676-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FT5LtcFS1DoKk05KGaZdm%2Fuploads%2FWwQSDKtZOq6ijkEep3SS%2F%E5%9B%BE%E7%89%87%201.png?alt=media&#x26;token=e8cffa8e-69b0-42f7-b448-41317c43b3cc" alt=""><figcaption></figcaption></figure>


The validation dataset is relatively simple: it contains only 50,000 images. We could simply decompress ILSVRC2012\_img\_val.tar, but for ease of use later, we should divide these images into 1,000 categories. (Just as with the training dataset, create 1,000 folders and put the images into their corresponding categories.)


First, decompress:

```shell
mkdir val
tar xvf ILSVRC2012_img_val.tar -C ./val
```


Enter val, then download the script and execute it:

```shell
cd val
wget https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh
chmod +x ./valprep.sh
./valprep.sh
rm valprep.sh
```


The final form of the validation dataset should look like the following screenshot.

<figure><img src="https://2664391676-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FT5LtcFS1DoKk05KGaZdm%2Fuploads%2FUIib3T1m1a4N2wb7l51Y%2F%E5%9B%BE%E7%89%87%201.png?alt=media&#x26;token=bf66c686-6e49-4b4f-95ef-7f56e039dc30" alt=""><figcaption></figcaption></figure>


After processing, the dataset should be in the following format.


![](https://2664391676-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FT5LtcFS1DoKk05KGaZdm%2Fuploads%2F87HlNm9LF2c22PTxPnT2%2Fimage.png?alt=media\&token=51f4e2e2-b07d-4f46-a78d-1ab9c0f4a31d)


Parallax estimation: SceneFlow

<figure><img src="https://2664391676-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FT5LtcFS1DoKk05KGaZdm%2Fuploads%2FhzXyYCBJ68ar8JuLEvB3%2F03.png?alt=media&#x26;token=2d15059e-edb3-454a-b343-b7779d514dd6" alt=""><figcaption></figcaption></figure>

Link: <https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html>

Download the six datasets shown in the screenshot above and decompress them.


After downloading is complete, organise the directories as follows so that the web service can recognise them.

(1) ImageNet

```
/mnt/imagenet
|--train
|--val
```

(2) SceneFlow

```
/mnt/SceneFlow
|--FlyingThings3D
|--Monkaa
|--Driving
```
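Before launching training, it can help to sanity-check that the expected directories are in place. The sketch below assumes the mount points listed above; adjust the paths if yours differ.

```shell
# Check that the expected dataset directories exist.
# Paths follow the layout above; adjust if your mount points differ.
for d in /mnt/imagenet/train /mnt/imagenet/val \
         /mnt/SceneFlow/FlyingThings3D /mnt/SceneFlow/Monkaa /mnt/SceneFlow/Driving; do
  if [ -d "$d" ]; then
    echo "OK:      $d"
  else
    echo "MISSING: $d"
  fi
done
```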

B. Launching Web Service Training


(1) Download dist-automl via <https://www.matrix.io/assets/dropdown/dist-automl1.1.zip>


(2) Decompress dist-automl.

Command: unzip dist-automl1.1.zip -d dist-automl


(3) Enter the dist-automl directory.


Command: cd dist-automl


(4) Install the Python libraries required by the project.


Command: pip install -r requirements.txt


(5) Run the launch command.

Command: streamlit run training\_manager.py


If the terminal shows the following, the service has been launched successfully.

<figure><img src="https://2664391676-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FT5LtcFS1DoKk05KGaZdm%2Fuploads%2Fs6bOfGc2nYvSgKgdvvIn%2F04.png?alt=media&#x26;token=26cc4ad0-62df-40bd-a944-1dbb37c0da70" alt=""><figcaption></figcaption></figure>

Replace the part in the red bracket with the public IP of the server where you wish to deploy the service, and you will be able to access the training web service.&#x20;


You may also offer your deployed service as a distributed Auto-ML service on the Mainnet.


<figure><img src="https://2664391676-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FT5LtcFS1DoKk05KGaZdm%2Fuploads%2FcUVVUXgkmwVGmEJGStzB%2F05.png?alt=media&#x26;token=5f56a9d0-ebba-49c8-9a95-767f0875a701" alt=""><figcaption></figcaption></figure>


## How to Create a MANTA Wallet

Download the latest update: dist-automl1.1.zip

Download link: <https://www.matrix.io/assets/dropdown/dist-automl1.1.zip>


1: Creating your own MANTA wallet

Create a new wallet at wallet.matrix.io. Keep your private key/password/mnemonic safe.


2: Running crypto wallet software on the MANTA server

![](https://2664391676-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FT5LtcFS1DoKk05KGaZdm%2Fuploads%2FkQGNZc2voUL0ndKlekiU%2F%E5%9B%BE%E7%89%87%201.png?alt=media\&token=e3541319-af2b-46f7-98af-50486c2e663c)

a. Copy the wallet address into the Address box to encrypt it.

b. Write down the password and encrypted message and send them to \*\*@matrix.io.


3: Copy the encrypted message to Wallet\_info.md in the project directory


<figure><img src="https://2664391676-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FT5LtcFS1DoKk05KGaZdm%2Fuploads%2F1xXLG6yv6virtVLTf5tu%2F%E5%9B%BE%E7%89%87%201.png?alt=media&#x26;token=60a49248-cb6a-432a-b554-44ebd55c5848" alt=""><figcaption></figcaption></figure>
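For example, from a shell in the project directory (the placeholder below stands in for your actual encrypted message from step 2; never paste a raw private key anywhere):

```shell
# Run inside the dist-automl project directory.
# Replace the placeholder with the encrypted message from step 2.
printf '%s\n' 'PASTE-YOUR-ENCRYPTED-MESSAGE-HERE' > Wallet_info.md
```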

4: Launch web\_service and view your wallet

Launch the service: streamlit run web\_service.py

Enter the password to view wallet information.

<figure><img src="https://2664391676-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FT5LtcFS1DoKk05KGaZdm%2Fuploads%2FYsVYSOwxp3Z7NopxQpz4%2F%E5%9B%BE%E7%89%87%201.png?alt=media&#x26;token=a11abf9c-6279-4358-9c29-2ccf44f25354" alt=""><figcaption></figcaption></figure>
