MANTA Mainnet Miner Deployment Guide

February 17, 2023

1. About MANTA

MANTA is a distributed auto-machine-learning platform built on the Matrix Mainnet. It is based on an automated machine learning (Auto-ML) application and its deployment system. It utilises Auto-ML network-search algorithms to locate an accurate, low-latency deep model and accelerates the search process through distributed computing.

Currently, you may deploy MANTA either on a Matrix miner or independently. After completing the deployment, you may access the web services trained by the software through the server's public IP address on port 8502. Once MANTA begins commercial operation, the MANTA service platform will keep a record of all users' public IPs and wallet addresses. The platform will assign learning tasks to users and encourage them to complete the tasks with rewards sent to their wallet addresses.

2. Hardware Requirement

CPU: Intel Xeon series with a base frequency of no less than 2.0 GHz and at least 8 cores

GPU: Nvidia Pascal or higher, video memory of no less than 12 GB

Network: internal bandwidth over 10 Gbps, external bandwidth over 10 Gbps

Hard drive: At least 700 GB of SSD storage (500 GB for storing Matrix Mainnet data and 200 GB for storing models and training logs)

Memory: At least 32 GB of RAM

Network Configuration: A server that can be reached from the public network at an address of the form http://{IP}.

(For instance, http://{IP}:8502. The IP part must be accessible from the public network.)
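
As a quick sanity check against the requirements above, the following standard Linux commands report core count, memory, disk space and GPU. The /mnt mount point is an assumption; check whichever volume will hold the data.

nproc          # CPU cores (should be at least 8)
free -h        # memory (should be at least 32 GB)
df -h /mnt     # free space on the data volume (at least 700 GB)
nvidia-smi     # GPU model and video memory (Pascal or newer, at least 12 GB)

Once the service is running, you can confirm the port is publicly reachable from a machine outside your network:

curl -I http://{IP}:8502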

3. Distributed Auto-ML Web Service Configuration

A. The project will utilise two datasets: one for image categorisation and one for training parallax estimation models.

Image categorisation: ImageNet

Link: https://www.image-net.org/challenges/LSVRC/2012/

For more details, visit https://blog.csdn.net/Yuan_mingyu/article/details/123940228

a. Dataset Download

Download the datasets at https://www.image-net.org/challenges/LSVRC/index.php.

For image recognition, download the following two files:

Training images (Task 1 & 2): https://image-net.org/data/ILSVRC/2012/ILSVRC2012_img_train.tar

Validation images (all tasks): https://image-net.org/data/ILSVRC/2012/ILSVRC2012_img_val.tar

b. Dataset Processing

Having downloaded the training and validation datasets, we now need to convert them into a format that models can load directly.

First, decompress ILSVRC2012_img_train.tar into a folder named train. The decompressed folder should contain 1,000 tar files, each representing one category of images. The files are named after their categories, so do not rename them. Simply decompress these tar files.

Decompress:

mkdir train
tar xvf ILSVRC2012_img_train.tar -C ./train

As there are too many tar files to decompress by hand, write a script unzip.sh as below:
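
A minimal sketch of unzip.sh, assuming each category tar should be extracted into a folder named after the file and then deleted to save space:

#!/bin/bash
# Run from the directory that contains the train folder.
cd ./train
for f in *.tar; do
  d="${f%.tar}"        # n01440764.tar -> n01440764
  mkdir -p "$d"        # one folder per category
  tar xf "$f" -C "$d"  # extract this category's images
  rm -f "$f"           # delete the tar once extracted
done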

Give the script execution permissions.

chmod +x ./unzip.sh

./unzip.sh

As ILSVRC2012_img_train.tar is rather large, you may delete it once extraction is complete. Make sure all of the category tar files have been moved into train before proceeding.

The final training dataset should be a train folder containing 1,000 category subfolders, each holding that category's images.

The validation dataset is relatively simple: it contains only 50,000 images. We could simply decompress ILSVRC2012_img_val.tar, but for ease of use later, we should divide these images into the same 1,000 categories. (Just as with the training dataset, create 1,000 folders and put the images into their corresponding categories.)

First, decompress:

mkdir val
tar xvf ILSVRC2012_img_val.tar -C ./val

Enter val, then download the sorting script and execute it:

cd val
wget https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh
chmod +x ./valprep.sh
./valprep.sh
rm valprep.sh

After processing, the val folder should mirror the training layout: 1,000 category subfolders, each containing its validation images.
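
For illustration, using hypothetical but typically-named ImageNet files, the processed layout looks like this:

train/n01440764/n01440764_10026.JPEG
val/n01440764/ILSVRC2012_val_00000293.JPEG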

Parallax estimation: SceneFlow

Link: https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html

Download the six dataset archives for the FlyingThings3D, Monkaa and Driving subsets (the RGB cleanpass images and the disparity data for each) and decompress them.
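
A minimal decompression sketch, assuming the archives were downloaded into the current directory as .tar or .tar.bz2 files (the formats the SceneFlow site provides):

# Extract every SceneFlow archive in the current directory.
for f in *.tar *.tar.bz2; do
  [ -e "$f" ] || continue   # skip patterns that matched nothing
  tar xvf "$f"              # GNU tar auto-detects bz2 compression
done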

After downloading is complete, arrange the directory paths as follows so the datasets can be recognised by the web services.

(1) ImageNet

/mnt/imagenet
|--train
|--val

(2) SceneFlow

/mnt/SceneFlow

|--FlyingThings3D

|--Monkaa

|--Driving
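
A minimal sketch of the moves that produce this layout, assuming the extracted folders sit in your current working directory and /mnt is the intended data volume:

mkdir -p /mnt/imagenet /mnt/SceneFlow
mv train val /mnt/imagenet/
mv FlyingThings3D Monkaa Driving /mnt/SceneFlow/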

B. Launching Web Service Training

(1) Download dist-automl via https://www.matrix.io/assets/dropdown/dist-automl1.1.zip

(2) Decompress dist-automl.

Command: unzip dist-automl1.1.zip -d dist-automl

(3) Enter the dist-automl directory.

Command: cd dist-automl

(4) Install the Python libraries required by the project.

Command: pip install -r requirements.txt

(5) Execute the launch-service command.

Command: streamlit run training_manager.py
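
Streamlit listens on port 8501 by default. If you want the service exposed on port 8502 as described above, Streamlit's standard server flags can be passed at launch (a sketch, assuming no project config already sets them):

streamlit run training_manager.py --server.address 0.0.0.0 --server.port 8502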

If the terminal shows Streamlit's startup output with the service URLs, the service has been launched successfully.

Replace the host portion of the printed URL with the public IP of the server where you deployed the service, and you will be able to access the training web service.

You may also offer your deployed service as a distributed Auto-ML service on the Mainnet.

How to Create a MANTA Wallet

Download the latest version: dist-automl1.1.zip

Download link: https://www.matrix.io/assets/dropdown/dist-automl1.1.zip

1: Creating your own MANTA wallet

Create a new wallet at wallet.matrix.io. Keep your private key/password/mnemonic safe.

2: Running the crypto wallet software on the MANTA server

a. Paste your wallet address into the Address box to encrypt it.

b. Write down the password and the encrypted message, and send them to **@matrix.io.

3: Copy the encrypted message into Wallet_info.md in the project directory

4: Launch web_service and view your wallet

Launch the service: streamlit run web_service.py

Enter the password to view wallet information.
