manual transfer done for documentation, still hero issues for parsing

This commit is contained in:
mik-tf 2024-04-15 21:57:46 +00:00
parent 99c05100c3
commit b63f091e63
536 changed files with 20490 additions and 0 deletions

<h1>Commands</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Work on Docs](#work-on-docs)
- [To start the GridProxy server](#to-start-the-gridproxy-server)
- [Run tests](#run-tests)
***
## Introduction
The Makefile covers most of the frequent commands needed to work on the project.
## Work on Docs
We use [swaggo/swag](https://github.com/swaggo/swag) to generate Swagger docs based on the annotations inside the code.
- Install the `swag` executable binary:
```bash
go install github.com/swaggo/swag/cmd/swag@latest
```
- The executable file is now in the Go binary directory:
```bash
ls $(go env GOPATH)/bin
```
- To run `swag`, either use the full path `$(go env GOPATH)/bin/swag` or add the Go binary directory to `$PATH`:
```bash
export PATH=$PATH:$(go env GOPATH)/bin
```
- Format the code comments with swag:
```bash
swag fmt
```
- Update the docs:
```bash
swag init
```
- To parse external types from `vendor`:
```bash
swag init --parseVendor
```
- To run the full docs generation command:
```bash
make docs
```
## To start the GridProxy server
After preparing the Postgres database, you can `go run` the main file in `cmds/proxy_server/main.go`, which is responsible for starting all the needed servers/clients.
The server options:
| Option | Description |
|---|---|
| -address | Server IP address (default `":443"`) |
| -ca | certificate authority used to generate certificate (default `"https://acme-staging-v02.api.letsencrypt.org/directory"`) |
| -cert-cache-dir | path to store generated certs in (default `"/tmp/certs"`) |
| -domain | domain on which the server will be served |
| -email | email address to generate certificate with |
| -log-level | log level |
| -no-cert | start the server without certificate |
| -postgres-db | postgres database |
| -postgres-host | postgres host |
| -postgres-password | postgres password |
| -postgres-port | postgres port (default 5432) |
| -postgres-user | postgres username |
| -tfchain-url | TF chain URL (default `"wss://tfchain.dev.grid.tf/ws"`) |
| -relay-url | RMB relay URL (default `"wss://relay.dev.grid.tf"`) |
| -mnemonics | Dummy user mnemonics for relay calls |
| -v | shows the package version |
For a full server setup:
```bash
make restart
```
## Run tests
There are two types of tests in the project:
- Unit Tests
- Found in `pkg/client/*_test.go`
- Run with `go test -v ./pkg/client`
- Integration Tests
- Found in `tests/queries/`
- Run with:
```bash
go test -v \
--seed 13 \
--postgres-host <postgres-ip> \
--postgres-db tfgrid-graphql \
--postgres-password postgres \
--postgres-user postgres \
--endpoint <server-ip> \
--mnemonics <insert user mnemonics>
```
- To run a specific test, append the previous command with:
```bash
-run <TestName>
```
You can find the test names in the `tests/queries/*_test.go` files.
To run all the tests, use:
```bash
make test-all
```

<h1>Contributions Guide</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Project structure](#project-structure)
- [Internal](#internal)
- [Pkg](#pkg)
- [Writing tests](#writing-tests)
***
## Introduction
This is a quick guide on how to contribute.
## Project structure
The main structure of the code base is as follows:
- `charts`: helm chart
- `cmds`: includes the project Golang entrypoints
- `docs`: project documentation
- `internal`: contains the explorer API logic and the cert manager implementation; this is where most of the feature work is done
- `pkg`: contains client implementation and shared libs
- `tests`: integration tests
- `tools`: DB tools to prepare the Postgres DB for testing and development
- `rootfs`: ZOS root endpoint that will be mounted in the docker image
### Internal
- `explorer`: contains the explorer server logic:
- `db`: the db connection and operations
- `mw`: defines the generic action mount that will be used as an HTTP handler
- `certmanager`: logic to ensure certificates are available and up to date
`server.go` includes the logic for all the API operations.
### Pkg
- `client`: client implementation
- `types`: defines all the API objects
## Writing tests
Adding a new endpoint should be accompanied by a corresponding test. Ideally, every change or bug fix should include a test to ensure the new behavior/fix works as intended.
Since these are integration tests, you first need to make sure that your local db is seeded with the necessary data. See the tools [doc](./db_testing.md) for more information about how to prepare your db.
The testing tools offer two clients that are the basis of most tests:
- `local`: this client connects to the local db
- `proxy client`: this client connects to the running local instance
You need to start an instance of the server before running the tests. Check [here](./commands.md) for how to start.

<h1>Database</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Max Open Connections](#max-open-connections)
***
## Introduction
The grid proxy has access to a postgres database containing information about the tfgrid, specifically information about grid nodes, farms, twins, and contracts.\
The database is filled/updated by this [indexer](https://github.com/threefoldtech/tfchain_graphql).
The grid proxy mainly retrieves information from the db with a few modifications for efficient retrieval (e.g. adding indices, caching node GPUs, etc.).
## Max Open Connections
The postgres database can handle 100 open connections concurrently (the default value set by postgres). Depending on the infrastructure, this number can be increased by modifying it in the postgres.conf file where the db is deployed, or by executing the query `ALTER system SET max_connections=size-of-connection`, but this requires a db restart to take effect.\
The explorer creates a connection pool to the postgres db, with the max open pool connections set to a specific number (currently 80).\
It's important to distinguish between the database max connections and the max pool open connections: if the pool had no constraints, it would try to open as many connections as it wanted, with no notion of the maximum number of connections the database accepts. It would then be the database's responsibility to accept or deny each connection.\
This is why the max number of open pool connections is set to 80: it's below the max connections the database can handle (100), and it leaves room for actors outside of the explorer to open connections with the database.

<h1>DB for testing</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Run postgresql container](#run-postgresql-container)
- [Create the DB](#create-the-db)
- [Method 1: Generate a db with relevant schema using the db helper tool:](#method-1-generate-a-db-with-relevant-schema-using-the-db-helper-tool)
- [Method 2: Fill the DB from a Production db dump file, for example if you have `dump.sql` file, you can run:](#method-2-fill-the-db-from-a-production-db-dump-file-for-example-if-you-have-dumpsql-file-you-can-run)
***
## Introduction
We show how to use a database for testing.
## Run postgresql container
```bash
docker run --rm --name postgres \
-e POSTGRES_USER=postgres \
-e POSTGRES_PASSWORD=postgres \
-e POSTGRES_DB=tfgrid-graphql \
-p 5432:5432 -d postgres
```
## Create the DB
You can either generate a db with the relevant schema to quickly test things locally, or load a previously taken DB dump file:
### Method 1: Generate a db with relevant schema using the db helper tool:
```bash
cd tools/db/ && go run . \
--postgres-host 127.0.0.1 \
--postgres-db tfgrid-graphql \
--postgres-password postgres \
--postgres-user postgres \
--reset
```
### Method 2: Fill the DB from a Production db dump file, for example if you have `dump.sql` file, you can run:
```bash
psql -h 127.0.0.1 -U postgres -d tfgrid-graphql < dump.sql
```

<h1>The Grid Explorer</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Explorer Overview](#explorer-overview)
- [Explorer Endpoints](#explorer-endpoints)
***
## Introduction
The Grid Explorer is a REST API used to index various information from the TFChain.
## Explorer Overview
- Complex inter-table queries and filters can't be applied directly on the chain due to limitations on indexing blockchain information.
- This is where TFGridDB comes in: a shadow database that contains all the on-chain data, updated every 2 hours.
- The explorer can then run raw SQL queries on the database with all the needed limits and filters.
- The technology used to extract the info from the blockchain is Subsquid; check the [repo](https://github.com/threefoldtech/tfchain_graphql).
## Explorer Endpoints
| HTTP Verb | Endpoint | Description |
| --------- | --------------------------- | ---------------------------------- |
| GET | `/contracts` | Show all contracts on the chain |
| GET | `/farms` | Show all farms on the chain |
| GET | `/gateways` | Show all gateway nodes on the grid |
| GET | `/gateways/:node_id` | Get a single gateway node details |
| GET | `/gateways/:node_id/status` | Get a single node status |
| GET | `/nodes` | Show all nodes on the grid |
| GET | `/nodes/:node_id` | Get a single node details |
| GET | `/nodes/:node_id/status` | Get a single node status |
| GET | `/stats` | Show the grid statistics |
| GET | `/twins` | Show all the twins on the chain |
| GET | `/nodes/:node_id/statistics`| Get a single node ZOS statistics |
For the available filters on each endpoint, check the `/swagger/index.html` endpoint on the running instance.

<h1>Running Proxy in Production</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Production Run](#production-run)
- [To upgrade the machine](#to-upgrade-the-machine)
- [Dockerfile](#dockerfile)
- [Update helm package](#update-helm-package)
- [Install the chart using helm package](#install-the-chart-using-helm-package)
***
## Introduction
We show how to run grid proxy in production.
## Production Run
- Download the latest binary [here](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/grid-client)
- Add execution permission to the binary and move it to the bin directory
```bash
chmod +x ./gridproxy-server
mv ./gridproxy-server /usr/local/bin/gridproxy-server
```
- Add a new systemd service
```bash
cat << EOF > /etc/systemd/system/gridproxy-server.service
[Unit]
Description=grid proxy server
After=network.target
[Service]
ExecStart=gridproxy-server --domain gridproxy.dev.grid.tf --email omar.elawady.alternative@gmail.com -ca https://acme-v02.api.letsencrypt.org/directory --postgres-host 127.0.0.1 --postgres-db db --postgres-password password --postgres-user postgres --mnemonics <insert user mnemonics>
Type=simple
Restart=always
User=root
Group=root
[Install]
WantedBy=multi-user.target
Alias=gridproxy.service
EOF
```
- Enable the service
```bash
systemctl enable gridproxy.service
```
- Start the service
```bash
systemctl start gridproxy.service
```
- Check the status
```bash
systemctl status gridproxy.service
```
- The command options:
  - domain: the host domain for which the SSL certificate will be generated.
  - email: the email address used to generate the SSL certificate.
  - ca: certificate authority server URL, e.g.
    - let's encrypt staging: `https://acme-staging-v02.api.letsencrypt.org/directory`
    - let's encrypt production: `https://acme-v02.api.letsencrypt.org/directory`
  - postgres-\*: postgres connection info.
## To upgrade the machine
- Replace the binary with the new one and run
```bash
systemctl restart gridproxy-server.service
```
- If you have changes in `/etc/systemd/system/gridproxy-server.service`, you have to run this command first
```bash
systemctl daemon-reload
```
## Dockerfile
To build and run the Dockerfile:
```bash
docker build -t threefoldtech/gridproxy .
docker run --name gridproxy -e POSTGRES_HOST="127.0.0.1" -e POSTGRES_PORT="5432" -e POSTGRES_DB="db" -e POSTGRES_USER="postgres" -e POSTGRES_PASSWORD="password" -e MNEMONICS="<insert user mnemonics>" threefoldtech/gridproxy
```
## Update helm package
- Do `helm lint charts/gridproxy`
- Regenerate the packages `helm package -u charts/gridproxy`
- Regenerate index.yaml `helm repo index --url https://threefoldtech.github.io/tfgridclient_proxy/ .`
- Push your changes
## Install the chart using helm package
- Add the repo to your helm
```bash
helm repo add gridproxy https://threefoldtech.github.io/tfgridclient_proxy/
```
- Install the chart
```bash
helm install gridproxy/gridproxy
```

<h1> Introducing Grid Proxy </h1>
<h2> Table of Contents </h2>
- [About](#about)
- [How to Use the Project](#how-to-use-the-project)
- [Used Technologies \& Prerequisites](#used-technologies--prerequisites)
- [Start for Development](#start-for-development)
- [Setup for Production](#setup-for-production)
- [Get and Install the Binary](#get-and-install-the-binary)
- [Add as a Systemd Service](#add-as-a-systemd-service)
***
<!-- About -->
## About
The TFGrid client Proxy acts as an interface to access information about the grid. It supports features such as filtering, limitation, and pagination to query the various entities on the grid, like nodes, contracts and farms. Additionally, the proxy can contact the required twin to retrieve stats about the relevant objects and perform ZOS calls.
The proxy is used as the backend of several threefold projects like:
- [Dashboard](../../dashboard/dashboard.md)
<!-- Usage -->
## How to Use the Project
If you don't want to set up your own instance, you can use one of the live instances. Each works against a different TFChain network.
- Dev network: <https://gridproxy.dev.grid.tf>
- Swagger: <https://gridproxy.dev.grid.tf/swagger/index.html>
- Qa network: <https://gridproxy.qa.grid.tf>
- Swagger: <https://gridproxy.qa.grid.tf/swagger/index.html>
- Test network: <https://gridproxy.test.grid.tf>
- Swagger: <https://gridproxy.test.grid.tf/swagger/index.html>
- Main network: <https://gridproxy.grid.tf>
- Swagger: <https://gridproxy.grid.tf/swagger/index.html>
Or follow the [development guide](#start-for-development) to run yours.
By default, the instance runs against devnet; to change that, you will need to configure the network while running the server.
> Note: You may face some differences between instances. That is normal, because each network is in a different stage of development and works correctly with other parts of the Grid on the same network.
<!-- Prerequisites -->
## Used Technologies & Prerequisites
1. **GoLang**: The two main parts of the project are written in `Go 1.17`; otherwise, you can just download the compiled binaries from the GitHub [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases)
2. **Postgresql**: Used to load the TFGrid DB
3. **Docker**: Containerize the running services such as Postgres and Redis.
4. **Mnemonics**: Secret seeds for a dummy identity to use for the relay client.
For more about the prerequisites and how to set up and configure them, follow the [Setup guide](./setup.md)
<!-- Development -->
## Start for Development
To start the services for development or testing, first make sure you have all the [Prerequisites](#used-technologies--prerequisites).
- Clone this repo
```bash
git clone https://github.com/threefoldtech/tfgrid-sdk-go.git
cd tfgrid-sdk-go/grid-proxy
```
- The `Makefile` has all that you need to deal with Db, Explorer, Tests, and Docs.
```bash
make help # list all the available subcommands.
```
- To quickly run a test explorer server:
```bash
make all-start e=<MNEMONICS>
```
Now you can access the server at `http://localhost:8080`
- Run the tests
```bash
make test-all
```
- Generate the docs:
```bash
make docs
```
To run in a development environment, see [here](./db_testing.md) how to generate a test db or load a db dump, then use:
```sh
go run cmds/proxy_server/main.go --address :8080 --log-level debug -no-cert --postgres-host 127.0.0.1 --postgres-db tfgrid-graphql --postgres-password postgres --postgres-user postgres --mnemonics <insert user mnemonics>
```
Then visit `http://localhost:8080/<endpoint>`
For more illustrations about the commands needed to work on the project, see the section [Commands](./commands.md). For more info about the project structure and contributions guidelines check the section [Contributions](./contributions.md).
<!-- Production-->
## Setup for Production
## Get and Install the Binary
- You can either build the project:
```bash
make build
chmod +x cmd/proxy_server/server \
&& mv cmd/proxy_server/server /usr/local/bin/gridproxy-server
```
- Or download a release:
Check the [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases) page and edit the next command with the chosen version.
```bash
wget https://github.com/threefoldtech/tfgrid-sdk-go/releases/download/v1.6.7-rc2/tfgridclient_proxy_1.6.7-rc2_linux_amd64.tar.gz \
&& tar -xzf tfgridclient_proxy_1.6.7-rc2_linux_amd64.tar.gz \
&& chmod +x server \
&& mv server /usr/local/bin/gridproxy-server
```
## Add as a Systemd Service
- Create the service file
```bash
cat << EOF > /etc/systemd/system/gridproxy-server.service
[Unit]
Description=grid proxy server
After=network.target
[Service]
ExecStart=gridproxy-server --domain gridproxy.dev.grid.tf --email omar.elawady.alternative@gmail.com -ca https://acme-v02.api.letsencrypt.org/directory --substrate wss://tfchain.dev.grid.tf/ws --postgres-host 127.0.0.1 --postgres-db db --postgres-password password --postgres-user postgres --mnemonics <insert user mnemonics>
Type=simple
Restart=always
User=root
Group=root
[Install]
WantedBy=multi-user.target
Alias=gridproxy.service
EOF
```

<h1>Grid Proxy</h1>
Welcome to the *Grid Proxy* section of the TFGrid Manual!
In this comprehensive guide, we delve into the intricacies of the ThreeFold Grid Proxy, a fundamental component that empowers the ThreeFold Grid ecosystem.
This section is designed to provide users, administrators, and developers with a detailed understanding of the TFGrid Proxy, offering step-by-step instructions for its setup, essential commands, and insights into its various functionalities.
The Grid Proxy plays a pivotal role in facilitating secure and efficient communication between nodes within the ThreeFold Grid, contributing to the decentralized and autonomous nature of the network.
Whether you are a seasoned ThreeFold enthusiast or a newcomer exploring the decentralized web, this manual aims to be your go-to resource for navigating the ThreeFold Grid Proxy landscape.
To assist you on your journey, we have organized the content into distinct chapters below, covering everything from initial setup procedures and database testing to practical commands, contributions, and insights into the ThreeFold Explorer and the Grid Proxy Database functionalities.
<h2>Table of Contents</h2>
- [Introducing Grid Proxy](./proxy.md)
- [Setup](./setup.md)
- [DB Testing](./db_testing.md)
- [Commands](./commands.md)
- [Contributions](./contributions.md)
- [Explorer](./explorer.md)
- [Database](./database.md)
- [Production](./production.md)
- [Release](./release.md)

<h1>Release Grid-Proxy</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Steps](#steps)
- [Debugging](#debugging)
***
## Introduction
We show the steps to release a new version of the Grid Proxy.
## Steps
To release a new version of the Grid-Proxy component, follow these steps:
Update the `appVersion` field in the `charts/Chart.yaml` file. This field should reflect the new version number of the release.
The release process includes generating and pushing a Docker image with the latest GitHub tag. This step is automated through the `gridproxy-release.yml` workflow.
Trigger the `gridproxy-release.yml` workflow by pushing the desired tag to the repository. This will initiate the workflow, which will generate the Docker image based on the tag and push it to the appropriate registry.
## Debugging
In the event that the workflow does not run automatically after pushing the tag and making the release, you can manually execute it using the GitHub Actions interface. Follow these steps:
Go to the [GitHub Actions page](https://github.com/threefoldtech/tfgrid-sdk-go/actions/workflows/gridproxy-release.yml) for the Grid-Proxy repository.
Locate the workflow named `gridproxy-release.yml`.
Trigger the workflow manually by selecting the "Run workflow" option.

<h1>Setup</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Install Golang](#install-golang)
- [Docker](#docker)
- [Postgres](#postgres)
- [Get Mnemonics](#get-mnemonics)
***
## Introduction
We show how to set up grid proxy.
## Install Golang
To install Golang, you can follow the official [guide](https://go.dev/doc/install).
## Docker
Docker is useful for running TFGridDB in a container environment. Read this to [install Docker engine](../../system_administrators/computer_it_basics/docker_basics.md#install-docker-desktop-and-docker-engine).
Note: it will be necessary to follow step #2 in the previous article to run Docker without sudo. If you want to avoid that, edit the Docker commands in the `Makefile` to add sudo.
## Postgres
If you have Docker installed, you can run postgres in a container with:
```bash
make db-start
```
Then you can either load a dump of the database if you have one:
```bash
make db-dump p=~/dump.sql
```
Or, more easily, you can fill the database tables with randomly generated data using the script `tools/db/generate.go`. To do that, run:
```bash
make db-fill
```
## Get Mnemonics
1. Install [polkadot extension](https://github.com/polkadot-js/extension) on your browser.
2. Create a new account from the extension. It is important to save the seeds.

<h1> ThreeFold Chain </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Twins](#twins)
- [Farms](#farms)
- [Nodes](#nodes)
- [Node Contract](#node-contract)
- [Rent Contract](#rent-contract)
- [Name Contract](#name-contract)
- [Contract billing](#contract-billing)
- [Contract locking](#contract-locking)
- [Contract grace period](#contract-grace-period)
- [DAO](#dao)
- [Farming Policies](#farming-policies)
- [Node Connection price](#node-connection-price)
- [Node Certifiers](#node-certifiers)
***
## Introduction
ThreeFold Chain (TFChain) is the base layer for everything that interacts with the grid. Nodes, farms and users are registered on the chain. It plays the central role in achieving decentralised consensus between a user and a Node to deploy a certain workload. A contract can be created on the chain that is essentially an agreement between a node and a user.
## Twins
A twin is the central identity object used for every entity that lives on the grid. A twin optionally has an IPv6 planetary network address which can be used for communication between twins regardless of their location. A twin is coupled to a private/public keypair on chain. This keypair can hold TFT on TF Chain.
## Farms
A farm must be created before a Node can be booted. Every farm needs a unique name and is linked to the Twin that creates it. Once a farm is created, a unique ID is generated. This ID can be provided to the boot image of a Node.
## Nodes
When a node is booted for the first time, it registers itself on the chain and a unique identity is generated for this Node.
## Node Contract
A node contract is a contract between a user and a Node to deploy a certain workload. The contract is specified as follows:
```
{
"contract_id": auto generated,
"node_id": unique id of the node,
"deployment_data": some additional deployment data,
"deployment_hash": hash of the deployment definition signed by the user,
"public_ips": number of public ips to attach to the deployment contract
}
```
We don't save the raw workload definition on the chain, only a hash of the definition. After the contract is created, the user must send the raw deployment to the node specified in the contract. The user can find where to send this data by looking up the Node's twin and contacting that twin over the planetary network.
## Rent Contract
A rent contract is also a contract between a user and a Node, but instead of reserving a part of the node's capacity, the full capacity is rented. Once a rent contract is created on a Node by a user, only this user can deploy node contracts on this specific node. A discount of 50% is given if the user wishes to rent the full capacity of a node by creating a rent contract. All node contracts deployed on a node where a user has a rent contract are free of use, except for the public IPs, which can be added on a node contract.
## Name Contract
A name contract is a contract that specifies a unique name to be used on the grid's web gateways. Once a name contract is created, this name can be used as an entrypoint for an application on the grid.
## Contract billing
Every contract is billed every hour on the chain. The amount due is deducted from the user's wallet every 24 hours or when the user cancels the contract. The total amount accrued in those 24 hours is sent to the following destinations:
- 10% goes to the threefold foundation
- 5% goes to staking pool wallet (to be implemented in a later phase)
- 50% goes to certified sales channel
- 35% TFT gets burned
See [pricing](../../../knowledge_base/cloud/pricing/pricing.md) for more information on how the cost for a contract is calculated.
## Contract locking
To avoid overloading the chain with transfer events and the like, we choose to lock the amount due for a contract every hour; after 24 hours, the amount is unlocked and deducted in one go. This lock is saved on a user's account; if the user has multiple contracts, the locked amounts are stacked.
## Contract grace period
When the owner of a contract runs out of funds in their wallet to pay for their deployment, the contract goes into a Grace Period state. The deployment, whatever that might be, will be inaccessible to the user during this period. When the wallet is funded with TFT again, the contract goes back to a normal operating state. If the grace period runs out (by default 2 weeks), the user's deployment and data are deleted from the node.
## DAO
See [DAO](../../dashboard/tfchain/tf_dao.md) for more information on the DAO on TF Chain.
## Farming Policies
See [farming_policies](farming_policies.md) for more information on the farming policies on TF Chain.
## Node Connection price
A connection price is set for every new Node that boots on the Grid. This connection price influences the amount of TFT farmed in a period. The connection price set on a node is permanent. The DAO can propose an increase / decrease of the connection price. At the time of writing, the connection price is set to $ 0.08. When the DAO proposes a new connection price and the vote passes, new nodes will attach to the new connection price.
## Node Certifiers
Node certifiers are entities who are allowed to set a node's certification level to `Certified`. The DAO can propose to add / remove entities that can certify nodes. This is useful for allowing approved resellers of ThreeFold nodes to mark nodes as Certified. A certified node farms 25% more tokens than a `Diy` node.

<h1> Farming Policies </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Farming Policy Fields](#farming-policy-fields)
- [Limits on linked policy](#limits-on-linked-policy)
- [Creating a Policy](#creating-a-policy)
- [Linking a policy to a Farm](#linking-a-policy-to-a-farm)
***
## Introduction
A farming policy defines how farming rewards are handed out for nodes. Every node has a farming policy attached. A farming policy is either linked to a farm, in which case new nodes are given the farming policy of the farm they are in once they register themselves. Alternatively a farming policy can be a "default". These are not attached to a farm, but instead they are used for nodes registered in farms which don't have a farming policy. Multiple defaults can exist at the same time, and the most fitting should be chosen.
## Farming Policy Fields
A farming policy has the following fields:
- id (used to link policies)
- name
- Default. This indicates whether the policy can be used by any new node (if the parent farm does not have a dedicated attached policy). Essentially, a `Default` policy serves as a base which can be overridden per farm by linking a non-default policy to said farm.
- Reward TFT per CU, SU, NU and IPV4
- Minimal uptime needed in integer format (example 995)
- Policy end (After this block number the policy can not be linked to new farms any more)
- If this policy is immutable or not. Immutable policies can never be changed again
Additionally, we also use the following fields, though those are only useful for `Default` farming policies:
- Node needs to be certified
- Farm needs to be certified (with certification level, which will be changed to an enum).
In case a farming policy is not attached to a farm, new nodes will pick the most appropriate farming policy from the default ones. To decide which one to pick, they should be considered in order with most restrictive first until one matches. That means:
- First check for the policy with highest farming certification (in the current case gold) and certified nodes
- Then check for a policy with highest farming certification (in the current case gold) and non certified nodes
- Check for policy without farming certification but certified nodes
- Last check for a policy without any kind of certification
Important here is that certification of a node only happens after it comes online for the first time. As such, when a node gets certified, its farming policy needs to be re-evaluated, but only if the currently attached farming policy on the node is a `Default` policy (as specifically linked policies have priority over default ones). When evaluating again, we first consider whether the node is eligible for the farming policy linked to the farm, if any.
## Limits on linked policy
When a council member attaches a policy to a farm, limits can be set. These limits define how much a policy can be used for nodes, before it becomes unusable and gets removed. The limits currently are:
- Farming Policy ID: the ID of the farming policy which we want to limit to a farm.
- CU. Every time a node is added to the farm, its CU is calculated and deducted from this amount. If the amount drops below 0, the maximum amount of CU that can be attached to this policy is reached.
- SU. Every time a node is added to the farm, its SU is calculated and deducted from this amount. If the amount drops below 0, the maximum amount of SU that can be attached to this policy is reached.
- End date. After this date the policy is not effective anymore and can't be used. It is removed from the farm and a default policy is used.
- Certification. If set, only certified nodes can get this policy. Non certified nodes get a default policy.
Once a limit is reached, the farming policy is removed from the farm, so new nodes will get one of the default policies until a new policy is attached to the farm.
## Creating a Policy
A council member can create a Farming Policy (DAO) in the following way:
1: Open [PolkadotJS](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/extrinsics) apps on the corresponding network and go to `Extrinsics`
2: Now select the account to propose from (should be an account that's a council member).
3: Select as action `dao` -> `propose`
4: Set a `threshold` (amount of farmers to vote)
5: Select the action `tfgridModule` -> `createFarmingPolicy` and fill in all the fields.
6: Create a forum post with the details of the farming policy and fill in the link of that post in the `link` field
7: Give it a good `description`.
8: Duration is optional (by default it's 7 days). A proposal cannot be closed before the duration has expired. If you wish to set a duration, it should be expressed in number of blocks from `now`. For example, 2 hours is equal to 1200 blocks (blocktime is 6 seconds); in this case, the duration should be filled in as `1200`.
9: If all the fields are filled in, click `Propose`; now farmers can vote. A proposal can be closed manually once there are enough votes AND the proposal is expired. To close, go to extrinsics -> `dao` -> `close` -> fill in the proposal hash and index (both can be found in chainstate).
All (su, cu, nu, ipv4) values should be expressed in USD units. Minimal uptime should be expressed as an integer that represents a percentage (example: `95`).
Policy end is optional (0 or some block number in the future). This is used for expiration.
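The duration-to-blocks conversion above can be sketched as a quick check (using the 6-second block time stated in the text):

```python
# Convert a proposal duration in hours to a number of blocks,
# assuming the 6-second block time mentioned above.
BLOCK_TIME_SECONDS = 6

def duration_in_blocks(hours):
    return int(hours * 3600 / BLOCK_TIME_SECONDS)

print(duration_in_blocks(2))   # 1200 blocks for 2 hours
print(duration_in_blocks(24))  # 14400 blocks for 1 day
```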
For reference:
![image](./img/create_policy.png)
## Linking a policy to a Farm
First identify the policy ID to link to a farm. You can check for farming policies in [chainstate](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/chainstate) -> `tfgridModule` -> `farmingPoliciesMap`: start with ID 1 and increment by 1 until you find the farming policy that was created when the proposal was expired and closed.
1: Open [PolkadotJS](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/extrinsics) apps on the corresponding network and go to `Extrinsics`
2: Now select the account to propose from (should be an account that's a council member).
3: Select the action `dao` -> `propose`
4: Set a `threshold` (amount of farmers to vote)
5: Select the action `tfgridModule` -> `attachPolicyToFarm` and fill in all the fields (FarmID and Limits).
6: Limits contains a `farming_policy_id` (required) and cu, su, end, node count (which are all optional). It also contains `node_certification`; if this is set to true, only certified nodes can have this policy.
7: Create a forum post with the details of why we want to link that farm to that policy and fill in the link of that post in the `link` field.
8: Provide a clear `description`.
9: Duration is optional (by default it's 7 days). A proposal cannot be closed before the duration has expired. If you wish to set a duration, it should be expressed in number of blocks from `now`. For example, 2 hours is equal to 1200 blocks (block time is 6 seconds); in this case, the duration should be filled in as `1200`.
10: If all the fields are filled in, click `Propose`; now farmers can vote. A proposal can be closed manually once there are enough votes AND the proposal has expired. To close, go to `Extrinsics` -> `dao` -> `close` and fill in the proposal hash and index (both can be found in chainstate).
For reference:
![image](./img/attach.png)

<h1>ThreeFold Chain</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Deployed instances](#deployed-instances)
- [Create a TFChain twin](#create-a-tfchain-twin)
- [Get your twin ID](#get-your-twin-id)
***
## Introduction
ThreeFold blockchain (aka TFChain) serves as a registry for Nodes, Farms, Digital Twins and Smart Contracts.
It is the backbone of [ZOS](https://github.com/threefoldtech/zos) and other components.
## Deployed instances
- Development network (Devnet):
- Polkadot UI: [https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.dev.grid.tf#/explorer](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.dev.grid.tf#/explorer)
- Websocket url: `wss://tfchain.dev.grid.tf`
- GraphQL UI: [https://graphql.dev.grid.tf/graphql](https://graphql.dev.grid.tf/graphql)
- QA testing network (QAnet):
- Polkadot UI: [https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.qa.grid.tf#/explorer](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.qa.grid.tf#/explorer)
- Websocket url: `wss://tfchain.qa.grid.tf`
- GraphQL UI: [https://graphql.qa.grid.tf/graphql](https://graphql.qa.grid.tf/graphql)
- Test network (Testnet):
- Polkadot UI: [https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.test.grid.tf#/explorer](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.test.grid.tf#/explorer)
- Websocket url: `wss://tfchain.test.grid.tf`
- GraphQL UI: [https://graphql.test.grid.tf/graphql](https://graphql.test.grid.tf/graphql)
- Production network (Mainnet):
- Polkadot UI: [https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.grid.tf#/explorer](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.grid.tf#/explorer)
- Websocket url: `wss://tfchain.grid.tf`
- GraphQL UI: [https://graphql.grid.tf/graphql](https://graphql.grid.tf/graphql)
## Create a TFChain twin
A twin is a unique identifier linked to a specific account on a given TFChain network.
There are two ways to create a twin:
- With the [Dashboard](../../dashboard/wallet_connector.md)
- a twin is automatically generated while creating a TFChain account
- With the TFConnect app
- a twin is automatically generated while creating a farm (in this case the twin will be created on mainnet)
## Get your twin ID
You can retrieve the twin ID associated with an account by going to `Developer` -> `Chain state` -> `tfgridModule` -> `twinIdByAccountID()`.
![service_contract_twin_from_account](img/service_contract_twin_from_account.png)

<h1>External Service Contract: How to set and execute</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Step 1: Create the contract and get its unique ID](#step-1-create-contract--get-unique-id)
- [Step 2: Fill contract](#step-2-fill-contract)
- [Step 3: Both parties approve contract](#step-3-both-parties-approve-contract)
- [Step 4: Bill for the service](#step-4-bill-for-the-service)
- [Step 5: Cancel the contract](#step-5-cancel-the-contract)
***
# Introduction
It is now possible to create a generic contract between two TFChain users (without restriction of account type) for some external service and bill for it.
The initial scenario is when two parties, a service provider and a consumer of the service, want to use TFChain to automatically handle the billing/payment process for an agreement (in TFT) they want to make for a service which is external to the grid.
This is a more direct and generic feature compared to the initial rewarding model, where a service provider (or solution provider) receives TFT from a rewards distribution process linked to a node contract and based on cloud capacity consumption, which follows specific billing rules.
The initial requirements are:
- Both the service provider and the consumer need to have their respective twin created on TFChain (if not, see [here](tfchain.md#create-a-tfchain-twin) how to do it)
- The consumer account needs to be funded (lack of funds will simply result in the contract being canceled when billed)
In the following steps we detail the sequence of extrinsics that need to be called in TFChain Polkadot portal for setting up and executing such contract.
<!-- We also show how to check if everything is going the right way via the TFChain GraphQL interface. -->
Make sure to use the right [links](tfchain.md#deployed-instances) depending on the targeted network.
# Step 1: Create contract / Get unique ID
## Create service contract
The contract creation can be initiated by either the service provider or the consumer.
In the TFChain Polkadot portal, the one who initiates the contract should go to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractCreate()`, using the account they intend to use in the contract, and select the corresponding service and consumer accounts before submitting the transaction.
![service_contract_create](img/service_contract_create.png)
Once executed, the service contract is `Created` between the two parties and a unique ID is generated.
## Last service contract ID
To get the last generated service contract ID go to `Developer` -> `Chain state` -> `smartContractModule` -> `serviceContractID()`.
![service_contract_id](img/service_contract_id.png)
## Parse service contract
To get the corresponding contract details, go to `Developer` -> `Chain state` -> `smartContractModule` -> `serviceContracts()` and provide the contract ID.
You should see the following details:
![service_contract_state](img/service_contract_state.png)
Check if the contract fields are correct, especially the twin ID of both service and consumer, to be sure you get the right contract ID, referenced as `serviceContractId`.
## Wrong Contract ID?
If the twin IDs on the service contract fields are wrong ([how to get my twin ID?](tfchain.md#get-your-twin-id)), the contract does not correspond to the last created contract.
In this case, parse the last contracts on the stack by decreasing `serviceContractId` and try to identify the right one; or the contract was simply not created, in which case you should repeat the creation process and evaluate the error log.
# Step 2: Fill contract
Once created, the service contract must be filled with its relative `per hour` fees:
- `baseFee` is the constant "per hour" price (in TFT) for the service.
- `variableFee` is the maximum "per hour" amount (in TFT) that can be billed extra.
To provide these values (only the service provider can set fees), go to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractSetFees()`, specifying `serviceContractId`.
![service_contract_set_fees](img/service_contract_set_fees.png)
Some metadata (the description of the service for example) must be filled in a similar way (`Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractSetMetadata()`).
In this case either the service provider or the consumer can set metadata.
![service_contract_set_metadata](img/service_contract_set_metadata.png)
The agreement will be automatically considered `Ready` when both metadata and fees are set (`metadata` not empty and `baseFee` greater than zero).
Note that as long as this condition is not reached, both extrinsics can still be called to modify the agreement.
You can check the contract status at each step of flow by parsing it as shown [here](#parse-service-contract).
# Step 3: Both parties approve contract
Now that the agreement is ready, the contract can be submitted for approval.
To approve the agreement, go to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractApprove()` specifying `serviceContractId`.
![service_contract_approve](img/service_contract_approve.png)
To reject the agreement, go to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractReject()` specifying `serviceContractId`.
![service_contract_reject](img/service_contract_reject.png)
The contract needs to be explicitly `Approved` by both service and consumer to be ready for billing.
Before reaching this state, if one of the parties decides to call the rejection extrinsic, it will instantly lead to the cancellation of the contract (and its permanent removal).
# Step 4: Bill for the service
Once the contract is accepted by both parties, it can be billed.
## Send bill to consumer
Only the service provider can bill the consumer, by going to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractBill()`, specifying `serviceContractId` and billing information such as `variableAmount` and some `metadata`.
![service_contract_bill](img/service_contract_bill.png)
## Billing frequency
⚠️ Important: because a service should not charge the user when it doesn't work, it is required that bills be sent at intervals of less than 1 hour.
Any bigger interval will result in a bill bounded to 1 hour (in other words, extra time will not be billed).
It is the service's responsibility to bill at the right frequency!
## Amount due calculation
When the bill is received, the chain calculates the bill amount based on the agreement values as follows:
~~~
amount = baseFee * T / 3600 + variableAmount
~~~
where `T` is the elapsed time, in seconds and bounded by 3600 (see [above](#billing-frequency)), since the last effective billing operation occurred.
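A worked example of this formula, with illustrative fee values:

```python
# Worked example of the amount-due formula above (illustrative values).
def amount_due(base_fee, elapsed_seconds, variable_amount):
    t = min(elapsed_seconds, 3600)  # T is bounded by 3600 (one hour)
    return base_fee * t / 3600 + variable_amount

# baseFee = 100 TFT/hour, billed after 30 minutes, variableAmount = 10 TFT:
print(amount_due(100, 1800, 10))  # 50 TFT of baseFee + 10 TFT -> 60.0
# A 2-hour gap is bounded to 1 hour of baseFee:
print(amount_due(100, 7200, 10))  # 100 + 10 -> 110.0
```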
## Protection against draining
Note that if `variableAmount` is too high (i.e. `variableAmount > variableFee * T / 3600`) the billing extrinsic will fail.
The `variableFee` value in the contract is interpreted as being "per hour" and acts as a protection mechanism to avoid draining the consumer's funds.
Indeed, while it is technically possible for the service to send a bill every second, there would be no gain in doing so (besides uselessly overloading the chain).
So it is also the service's responsibility to set a suitable `variableAmount` according to the billing frequency!
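The protection check can be sketched numerically (illustrative values; the actual validation happens on chain):

```python
# Sketch of the draining protection: a bill whose variableAmount exceeds the
# pro-rated variableFee is rejected. Illustrative values, not chain code.
def bill_allowed(variable_fee, elapsed_seconds, variable_amount):
    t = min(elapsed_seconds, 3600)
    return variable_amount <= variable_fee * t / 3600

print(bill_allowed(50, 1800, 20))  # True:  20 <= 50 * 1800 / 3600 = 25
print(bill_allowed(50, 1800, 30))  # False: 30 > 25
```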
## Billing considerations
Then, if all goes well and no error is dispatched after submitting the transaction, the consumer pays the due amount calculated from the bill (see calculation details [above](#amount-due-calculation)).
In practice the amount is transferred from the consumer twin account to the service twin account.
Be aware that if the consumer runs out of funds, the billing will fail AND the contract will automatically be canceled.
# Step 5: Cancel the contract
The contract can be canceled (and definitively removed) at any moment of the flow after it is created.
Either the service provider or the consumer can do so by going to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractCancel()`, specifying `serviceContractId`.
![service_contract_cancel](img/service_contract_cancel.png)

<h1>Solution Provider</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Changes to Contract Creation](#changes-to-contract-creation)
- [Creating a Provider](#creating-a-provider)
- [Council needs to approve a provider before it can be used](#council-needs-to-approve-a-provider-before-it-can-be-used)
***
## Introduction
> Note: While the solution provider program is still active, the plan is to discontinue the program in the near future. We will update the manual as we get more information. We currently do not accept new solution providers.
A "solution" is something running on the grid, created by a community member. This can be brought forward to the council, who can vote on it to recognize it as a solution. On contract creation, a recognized solution can be referenced, in which case part of the payment goes toward the address coupled to the solution. On chain a solution looks as follows:
- Description: some text, limited in length. The limit should be rather low; if a longer description is desired, a link can be inserted. 160 characters should be enough.
- Up to 5 payout addresses, each with a payout percentage. This is the percentage of the payout received by the associated address. The amount is deducted from the payout to the treasury and specified as a percentage of the total contract cost. As such, the sum of these percentages can never exceed 50%. If this value is less than 50%, the remainder is paid to the treasury. Example: a 10% payout to address 1 and a 5% payout to address 2 means 15% goes to the two listed addresses combined and 35% goes to the treasury (instead of the usual 50%); the rest remains as is. If the cost were 10 TFT, 1 TFT would go to address 1, 0.5 TFT to address 2, and 3.5 TFT to the treasury (instead of the default 5 TFT).
- A unique code. This code is used to link a solution to the contract (numeric ID).
This means contracts need to carry an optional solution code. If the code is not specified (default), the 50% goes entirely to the treasury (as is always the case today).
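The payout example above can be recomputed with a short sketch (hypothetical address labels; percentages as in the text):

```python
# Recompute the payout example above: cost 10 TFT, takes of 10% and 5%,
# with the remainder of the usual 50% treasury share going to the treasury.
# Address labels are hypothetical placeholders.
def split_payout(total_cost, takes, treasury_share=50):
    assert sum(takes.values()) <= 50, "combined takes can never exceed 50%"
    payouts = {who: total_cost * pct / 100 for who, pct in takes.items()}
    treasury = total_cost * treasury_share / 100 - sum(payouts.values())
    return payouts, treasury

payouts, treasury = split_payout(10, {"addr1": 10, "addr2": 5})
print(payouts)   # {'addr1': 1.0, 'addr2': 0.5}
print(treasury)  # 3.5
```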
A solution can be created by calling the extrinsic `smartContractModule` -> `createSolutionProvider` with parameters:
- description
- link (to website)
- list of providers
Provider:
- who (account id)
- take (amount of take this account should get) specified as an integer of max 50. example: 25
A forum post should be created with the details of the created solution provider, and the DAO can vote to approve it or not. If the solution provider gets approved, it can be referenced on contract creation.
Note that a solution can be deleted. In this case, existing contracts should fall back to the default behavior (i.e. if code not found -> default).
## Changes to Contract Creation
When creating a contract, a `solution_provider_id` can be passed. An error will be returned if an invalid or non-approved solution provider id is passed.
## Creating a Provider
Creating a provider is as easy as going to the [PolkadotJS UI](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.dev.grid.tf#/extrinsics) (currently only on devnet).
Select module `SmartContractModule` -> `createSolutionProvider(..)`
Fill in all the details. You can specify up to 5 target accounts, each of which can have a take of the TFT generated from being a provider, up to a total maximum of 50%. `Take` should be specified as an integer, for example `25`.
Once this object is created, a forum post should be created here: <https://forum.threefold.io/>
![create](./img/create_provider.png)
## Council needs to approve a provider before it can be used
First propose the solution to be approved:
![propose_approve](./img/propose_approve.png)
After submission it should look like this:
![proposed_approved](./img/proposed_approve.png)
Now another member of the council needs to vote:
![vote](./img/vote_proposal.png)
After enough votes are reached, it can be closed:
![close](./img/close_proposal.png)
If the close was executed without error, the solution provider is approved and ready to be used.
Query the solution: `chainstate` -> `SmartContractModule` -> `solutionProviders`
![query](./img/query_provider.png)
Now the solution provider can be referenced on contract creation:
![create](./img/create_contract.png)

<h1>TFCMD</h1>
TFCMD (`tfcmd`) is a command line interface for interacting with and developing on the ThreeFold Grid.
Consult the [ThreeFoldTech TFCMD repository](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/grid-cli) for the latest updates. Make sure to read the [basics](../../system_administrators/getstarted/tfgrid3_getstarted.md).
<h2>Table of Contents</h2>
- [Getting Started](./tfcmd_basics.md)
- [Deploy a VM](./tfcmd_vm.md)
- [Deploy Kubernetes](./tfcmd_kubernetes.md)
- [Deploy ZDB](./tfcmd_zdbs.md)
- [Gateway FQDN](./tfcmd_gateway_fqdn.md)
- [Gateway Name](./tfcmd_gateway_name.md)
- [Contracts](./tfcmd_contracts.md)

<h1>TFCMD Getting Started</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Installation](#installation)
- [Login](#login)
- [Commands](#commands)
- [Using TFCMD](#using-tfcmd)
***
## Introduction
This section covers the basics on how to set up and use TFCMD (`tfcmd`).
TFCMD is available as binaries. Make sure to download the latest release and to stay up to date with new releases.
## Installation
An easy way to use TFCMD is to download and extract the TFCMD binaries to your path.
- Download latest release from [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases)
- ```
wget <binaries_url>
```
- Extract the binaries
- ```
tar -xvf <binaries_file>
```
- Move `tfcmd` to any `$PATH` directory:
```bash
mv tfcmd /usr/local/bin
```
## Login
Before interacting with the ThreeFold Grid via `tfcmd`, you should log in with your mnemonics and specify the grid network:
```console
$ tfcmd login
Please enter your mnemonics: <mnemonics>
Please enter grid network (main,test): <grid-network>
```
This validates your mnemonics and stores your mnemonics and network in your default configuration directory.
Check [UserConfigDir()](https://pkg.go.dev/os#UserConfigDir) for your default configuration directory.
## Commands
You can run the command `tfcmd help` at any time to access the help section. This will also display the available commands.
| Command | Description |
| ---------- | ---------------------------------------------------------- |
| cancel | Cancel resources on Threefold grid |
| completion | Generate the autocompletion script for the specified shell |
| deploy | Deploy resources to Threefold grid |
| get | Get a deployed resource from Threefold grid |
| help | Help about any command |
| login | Login with mnemonics to a grid network |
| version | Get latest build tag |
## Using TFCMD
Once you've logged in, you can use commands to deploy workloads on the TFGrid. Read the next sections for more information on different types of workloads available with TFCMD.

<h1>Contracts</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Get](#get)
- [Get Contracts](#get-contracts)
- [Get Contract](#get-contract)
- [Cancel](#cancel)
- [Optional Flags](#optional-flags)
***
## Introduction
We explain how to handle contracts on the TFGrid with `tfcmd`.
## Get
### Get Contracts
Get all contracts
```bash
tfcmd get contracts
```
Example:
```console
$ tfcmd get contracts
5:13PM INF starting peer session=tf-1184566 twin=81
Node contracts:
ID Node ID Type Name Project Name
50977 21 network vm1network vm1
50978 21 vm vm1 vm1
50980 14 Gateway Name gatewaytest gatewaytest
Name contracts:
ID Name
50979 gatewaytest
```
### Get Contract
Get specific contract
```bash
tfcmd get contract <contract-id>
```
Example:
```console
$ tfcmd get contract 50977
5:14PM INF starting peer session=tf-1185180 twin=81
5:14PM INF contract:
{
"contract_id": 50977,
"twin_id": 81,
"state": "Created",
"created_at": 1702480020,
"type": "node",
"details": {
"nodeId": 21,
"deployment_data": "{\"type\":\"network\",\"name\":\"vm1network\",\"projectName\":\"vm1\"}",
"deployment_hash": "21adc91ef6cdc915d5580b3f12732ac9",
"number_of_public_ips": 0
}
}
```
## Cancel
Cancel specified contracts or all contracts.
```bash
tfcmd cancel contracts <contract-id>... [Flags]
```
Example:
```console
$ tfcmd cancel contracts 50856 50857
5:17PM INF starting peer session=tf-1185964 twin=81
5:17PM INF contracts canceled successfully
```
### Optional Flags
- all: cancel all twin's contracts.
Example:
```console
$ tfcmd cancel contracts --all
5:17PM INF starting peer session=tf-1185964 twin=81
5:17PM INF contracts canceled successfully
```

<h1>Gateway FQDN</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Deploy](#deploy)
- [Required Flags](#required-flags)
- [Optional Flags](#optional-flags)
- [Get](#get)
- [Cancel](#cancel)
***
## Introduction
We explain how to use gateway fully qualified domain names on the TFGrid using `tfcmd`.
## Deploy
```bash
tfcmd deploy gateway fqdn [flags]
```
### Required Flags
- name: name for the gateway deployment also used for canceling the deployment. must be unique.
- node: node id to deploy gateway on.
- backends: list of backends the gateway will forward requests to.
- fqdn: FQDN pointing to the specified node.
### Optional Flags
- tls: add TLS passthrough option (default false).
Example:
```console
$ tfcmd deploy gateway fqdn -n gatewaytest --node 14 --backends http://93.184.216.34:80 --fqdn example.com
3:34PM INF deploying gateway fqdn
3:34PM INF gateway fqdn deployed
```
## Get
```bash
tfcmd get gateway fqdn <gateway>
```
gateway is the name used when deploying gateway-fqdn using tfcmd.
Example:
```console
$ tfcmd get gateway fqdn gatewaytest
2:05PM INF gateway fqdn:
{
"NodeID": 14,
"Backends": [
"http://93.184.216.34:80"
],
"FQDN": "awady.gridtesting.xyz",
"Name": "gatewaytest",
"TLSPassthrough": false,
"Description": "",
"NodeDeploymentID": {
"14": 19653
},
"SolutionType": "gatewaytest",
"ContractID": 19653
}
```
## Cancel
```bash
tfcmd cancel <deployment-name>
```
deployment-name is the name of the deployment specified while deploying using tfcmd.
Example:
```console
$ tfcmd cancel gatewaytest
3:37PM INF canceling contracts for project gatewaytest
3:37PM INF gatewaytest canceled
```

<h1>Gateway Name</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Deploy](#deploy)
- [Required Flags](#required-flags)
- [Optional Flags](#optional-flags)
- [Get](#get)
- [Cancel](#cancel)
***
## Introduction
We explain how to use gateway names on the TFGrid using `tfcmd`.
## Deploy
```bash
tfcmd deploy gateway name [flags]
```
### Required Flags
- name: name for the gateway deployment also used for canceling the deployment. must be unique.
- backends: list of backends the gateway will forward requests to.
### Optional Flags
- node: node id gateway should be deployed on.
- farm: farm id gateway should be deployed on; if set, choose an available node from the farm that fits vm specs (default 1). note: node and farm flags cannot both be set.
- tls: add TLS passthrough option (default false).
Example:
```console
$ tfcmd deploy gateway name -n gatewaytest --node 14 --backends http://93.184.216.34:80
3:34PM INF deploying gateway name
3:34PM INF fqdn: gatewaytest.gent01.dev.grid.tf
```
## Get
```bash
tfcmd get gateway name <gateway>
```
gateway is the name used when deploying gateway-name using tfcmd.
Example:
```console
$ tfcmd get gateway name gatewaytest
1:56PM INF gateway name:
{
"NodeID": 14,
"Name": "gatewaytest",
"Backends": [
"http://93.184.216.34:80"
],
"TLSPassthrough": false,
"Description": "",
"SolutionType": "gatewaytest",
"NodeDeploymentID": {
"14": 19644
},
"FQDN": "gatewaytest.gent01.dev.grid.tf",
"NameContractID": 19643,
"ContractID": 19644
}
```
## Cancel
```bash
tfcmd cancel <deployment-name>
```
deployment-name is the name of the deployment specified while deploying using tfcmd.
Example:
```console
$ tfcmd cancel gatewaytest
3:37PM INF canceling contracts for project gatewaytest
3:37PM INF gatewaytest canceled
```

<h1>Kubernetes</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Deploy](#deploy)
- [Required Flags](#required-flags)
- [Optional Flags](#optional-flags)
- [Get](#get)
- [Cancel](#cancel)
***
## Introduction
In this section, we explain how to deploy Kubernetes workloads on the TFGrid using `tfcmd`.
## Deploy
```bash
tfcmd deploy kubernetes [flags]
```
### Required Flags
- name: name for the master node deployment also used for canceling the cluster deployment. must be unique.
- ssh: path to public ssh key to set in the master node.
### Optional Flags
- master-node: node id master should be deployed on.
- master-farm: farm id master should be deployed on; if set, choose an available node from the farm that fits master specs (default 1). note: master-node and master-farm flags cannot both be set.
- workers-node: node id workers should be deployed on.
- workers-farm: farm id workers should be deployed on; if set, choose an available node from the farm that fits worker specs (default 1). note: workers-node and workers-farm flags cannot both be set.
- ipv4: assign public ipv4 for master node (default false).
- ipv6: assign public ipv6 for master node (default false).
- ygg: assign yggdrasil ip for master node (default true).
- master-cpu: number of cpu units for master node (default 1).
- master-memory: master node memory size in GB (default 1).
- master-disk: master node disk size in GB (default 2).
- workers-number: number of workers nodes (default 0).
- workers-ipv4: assign public ipv4 for each worker node (default false)
- workers-ipv6: assign public ipv6 for each worker node (default false)
- workers-ygg: assign yggdrasil ip for each worker node (default true)
- workers-cpu: number of cpu units for each worker node (default 1).
- workers-memory: memory size for each worker node in GB (default 1).
- workers-disk: disk size in GB for each worker node (default 2).
Example:
```console
$ tfcmd deploy kubernetes -n kube --ssh ~/.ssh/id_rsa.pub --master-node 14 --workers-number 2 --workers-node 14
4:21PM INF deploying network
4:22PM INF deploying cluster
4:22PM INF master yggdrasil ip: 300:e9c4:9048:57cf:504f:c86c:9014:d02d
```
## Get
```bash
tfcmd get kubernetes <kubernetes>
```
kubernetes is the name used when deploying kubernetes cluster using tfcmd.
Example:
```console
$ tfcmd get kubernetes kube
3:14PM INF k8s cluster:
{
"Master": {
"Name": "kube",
"Node": 14,
"DiskSize": 2,
"PublicIP": false,
"PublicIP6": false,
"Planetary": true,
"Flist": "https://hub.grid.tf/tf-official-apps/threefoldtech-k3s-latest.flist",
"FlistChecksum": "c87cf57e1067d21a3e74332a64ef9723",
"ComputedIP": "",
"ComputedIP6": "",
"YggIP": "300:e9c4:9048:57cf:e8a0:662b:4e66:8faa",
"IP": "10.20.2.2",
"CPU": 1,
"Memory": 1024
},
"Workers": [
{
"Name": "worker1",
"Node": 14,
"DiskSize": 2,
"PublicIP": false,
"PublicIP6": false,
"Planetary": true,
"Flist": "https://hub.grid.tf/tf-official-apps/threefoldtech-k3s-latest.flist",
"FlistChecksum": "c87cf57e1067d21a3e74332a64ef9723",
"ComputedIP": "",
"ComputedIP6": "",
"YggIP": "300:e9c4:9048:57cf:66d0:3ee4:294e:d134",
"IP": "10.20.2.2",
"CPU": 1,
"Memory": 1024
},
{
"Name": "worker0",
"Node": 14,
"DiskSize": 2,
"PublicIP": false,
"PublicIP6": false,
"Planetary": true,
"Flist": "https://hub.grid.tf/tf-official-apps/threefoldtech-k3s-latest.flist",
"FlistChecksum": "c87cf57e1067d21a3e74332a64ef9723",
"ComputedIP": "",
"ComputedIP6": "",
"YggIP": "300:e9c4:9048:57cf:1ae5:cc51:3ffc:81e",
"IP": "10.20.2.2",
"CPU": 1,
"Memory": 1024
}
],
"Token": "",
"NetworkName": "",
"SolutionType": "kube",
"SSHKey": "",
"NodesIPRange": null,
"NodeDeploymentID": {
"14": 22743
}
}
```
## Cancel
```bash
tfcmd cancel <deployment-name>
```
deployment-name is the name of the deployment specified while deploying using tfcmd.
Example:
```console
$ tfcmd cancel kube
3:37PM INF canceling contracts for project kube
3:37PM INF kube canceled
```

<h1>Deploy a VM</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Deploy](#deploy)
- [Flags](#flags)
- [Required Flags](#required-flags)
- [Optional Flags](#optional-flags)
- [Examples](#examples)
- [Deploy a VM without GPU](#deploy-a-vm-without-gpu)
- [Deploy a VM with GPU](#deploy-a-vm-with-gpu)
- [Get](#get)
- [Get Example](#get-example)
- [Cancel](#cancel)
- [Cancel Example](#cancel-example)
- [Questions and Feedback](#questions-and-feedback)
***
# Introduction
In this section, we explore how to deploy a virtual machine (VM) on the ThreeFold Grid using `tfcmd`.
# Deploy
You can deploy a VM with `tfcmd` using the following template accompanied by required and optional flags:
```bash
tfcmd deploy vm [flags]
```
## Flags
When you use `tfcmd`, there are two required flags (`name` and `ssh`), while the remaining flags are optional. Optional flags can be used, for example, to deploy a VM with a GPU, to set an IPv6 address, and much more.
### Required Flags
- **name**: name for the VM deployment also used for canceling the deployment. The name must be unique.
- **ssh**: path to public ssh key to set in the VM.
### Optional Flags
- **node**: node ID the VM should be deployed on.
- **farm**: farm ID the VM should be deployed on; if set, an available node that fits the VM specs is chosen from the farm (default `1`). Note: the node and farm flags cannot both be set.
- **cpu**: number of cpu units (default `1`).
- **disk**: size of disk in GB mounted on `/data`. If not set, no disk workload is made.
- **entrypoint**: entrypoint for the VM FList (default `/sbin/zinit init`). Note: setting this without the flist option will fail.
- **flist**: FList used in the VM (default `https://hub.grid.tf/tf-official-apps/threefoldtech-ubuntu-22.04.flist`). Note: setting this without the entrypoint option will fail.
- **ipv4**: assign public ipv4 for the VM (default `false`).
- **ipv6**: assign public ipv6 for the VM (default `false`).
- **memory**: memory size in GB (default `1`).
- **rootfs**: root filesystem size in GB (default `2`).
- **ygg**: assign yggdrasil ip for the VM (default `true`).
- **gpus**: assign a list of GPU IDs to the VM. Note: setting this without the node option will fail.
## Examples
We present simple examples on how to deploy a virtual machine with or without a GPU using `tfcmd`.
### Deploy a VM without GPU
```console
$ tfcmd deploy vm --name examplevm --ssh ~/.ssh/id_rsa.pub --cpu 2 --memory 4 --disk 10
12:06PM INF deploying network
12:06PM INF deploying vm
12:07PM INF vm yggdrasil ip: 300:e9c4:9048:57cf:7da2:ac99:99db:8821
```
### Deploy a VM with GPU
```console
$ tfcmd deploy vm --name examplevm --ssh ~/.ssh/id_rsa.pub --cpu 2 --memory 4 --disk 10 --gpus '0000:0e:00.0/1882/543f' --gpus '0000:0e:00.0/1887/593f' --node 12
12:06PM INF deploying network
12:06PM INF deploying vm
12:07PM INF vm yggdrasil ip: 300:e9c4:9048:57cf:7da2:ac99:99db:8821
```
# Get
To get the VM, use the following template:
```bash
tfcmd get vm <vm>
```
Make sure to replace `<vm>` with the name of the VM specified using `tfcmd`.
## Get Example
In the following example, the name of the deployment to get is `examplevm`.
```console
$ tfcmd get vm examplevm
3:20PM INF vm:
{
"Name": "examplevm",
"NodeID": 15,
"SolutionType": "examplevm",
"SolutionProvider": null,
"NetworkName": "examplevmnetwork",
"Disks": [
{
"Name": "examplevmdisk",
"SizeGB": 10,
"Description": ""
}
],
"Zdbs": [],
"Vms": [
{
"Name": "examplevm",
"Flist": "https://hub.grid.tf/tf-official-apps/threefoldtech-ubuntu-22.04.flist",
"FlistChecksum": "",
"PublicIP": false,
"PublicIP6": false,
"Planetary": true,
"Corex": false,
"ComputedIP": "",
"ComputedIP6": "",
"YggIP": "301:ad3a:9c52:98d1:cd05:1595:9abb:e2f1",
"IP": "10.20.2.2",
"Description": "",
"CPU": 2,
"Memory": 4096,
"RootfsSize": 2048,
"Entrypoint": "/sbin/zinit init",
"Mounts": [
{
"DiskName": "examplevmdisk",
"MountPoint": "/data"
}
],
"Zlogs": null,
"EnvVars": {
"SSH_KEY": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcGrS1RT36rHAGLK3/4FMazGXjIYgWVnZ4bCvxxg8KosEEbs/DeUKT2T2LYV91jUq3yibTWwK0nc6O+K5kdShV4qsQlPmIbdur6x2zWHPeaGXqejbbACEJcQMCj8szSbG8aKwH8Nbi8BNytgzJ20Ysaaj2QpjObCZ4Ncp+89pFahzDEIJx2HjXe6njbp6eCduoA+IE2H9vgwbIDVMQz6y/TzjdQjgbMOJRTlP+CzfbDBb6Ux+ed8F184bMPwkFrpHs9MSfQVbqfIz8wuq/wjewcnb3wK9dmIot6CxV2f2xuOZHgNQmVGratK8TyBnOd5x4oZKLIh3qM9Bi7r81xCkXyxAZbWYu3gGdvo3h85zeCPGK8OEPdYWMmIAIiANE42xPmY9HslPz8PAYq6v0WwdkBlDWrG3DD3GX6qTt9lbSHEgpUP2UOnqGL4O1+g5Rm9x16HWefZWMjJsP6OV70PnMjo9MPnH+yrBkXISw4CGEEXryTvupfaO5sL01mn+UOyE= abdulrahman@AElawady-PC\n"
},
"NetworkName": "examplevmnetwork"
}
],
"QSFS": [],
"NodeDeploymentID": {
"15": 22748
},
"ContractID": 22748
}
```
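The `tfcmd get vm` output above is a log line followed by plain JSON, so once the log prefix (e.g. `3:20PM INF vm:`) is stripped, standard tools can query individual fields. The following sketch inlines a reduced sample of that JSON for illustration; in practice, redirect from your saved output file instead.

```shell
# A sketch: extract a single field (here the Yggdrasil IP of the first VM)
# from stripped `tfcmd get vm` JSON. The heredoc stands in for a saved file.
python3 -c '
import json, sys
deployment = json.load(sys.stdin)
print(deployment["Vms"][0]["YggIP"])
' <<'EOF'
{"Vms": [{"YggIP": "301:ad3a:9c52:98d1:cd05:1595:9abb:e2f1"}]}
EOF
```

This prints the VM's Yggdrasil IP, which is handy for scripting an SSH connection right after deployment.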
# Cancel
To cancel your VM deployment, use the following template:
```bash
tfcmd cancel <deployment-name>
```
Make sure to replace `<deployment-name>` with the name of the deployment specified when deploying with `tfcmd`.
## Cancel Example
In the following example, the name of the deployment to cancel is `examplevm`.
```console
$ tfcmd cancel examplevm
3:37PM INF canceling contracts for project examplevm
3:37PM INF examplevm canceled
```
# Questions and Feedback
If you have any questions or feedback, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.
<h1>ZDBs</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Deploy](#deploy)
- [Required Flags](#required-flags)
- [Optional Flags](#optional-flags)
- [Get](#get)
- [Cancel](#cancel)
***
## Introduction
In this section, we explore how to use ZDBs related commands using `tfcmd` to interact with the TFGrid.
## Deploy
```bash
tfcmd deploy zdb [flags]
```
### Required Flags
- **project_name**: project name for the ZDBs deployment, also used for canceling the deployment. The name must be unique.
- **size**: HDD size of the ZDB in GB.
### Optional Flags
- **node**: node ID the ZDBs should be deployed on.
- **farm**: farm ID the ZDBs should be deployed on; if set, an available node that fits the ZDBs deployment specs is chosen from the farm (default `1`). Note: the node and farm flags cannot both be set.
- **count**: count of ZDBs to be deployed (default `1`).
- **names**: a slice of names for the number of ZDBs.
- **password**: password for the deployed ZDBs.
- **description**: optional description for your ZDBs.
- **mode**: the enumeration of the modes 0-db can operate in (default `user`).
- **public**: whether the ZDB gets a public IPv6 address (default `false`).
Example:
- Deploying ZDBs
```console
$ tfcmd deploy zdb --project_name examplezdb --size=10 --count=2 --password=password
12:06PM INF deploying zdbs
12:06PM INF zdb 'examplezdb0' is deployed
12:06PM INF zdb 'examplezdb1' is deployed
```
## Get
```bash
tfcmd get zdb <zdb-project-name>
```
`zdb-project-name` is the name of the deployment specified while deploying with `tfcmd`.
Example:
```console
$ tfcmd get zdb examplezdb
3:20PM INF zdb:
{
"Name": "examplezdb",
"NodeID": 11,
"SolutionType": "examplezdb",
"SolutionProvider": null,
"NetworkName": "",
"Disks": [],
"Zdbs": [
{
"name": "examplezdb1",
"password": "password",
"public": false,
"size": 10,
"description": "",
"mode": "user",
"ips": [
"2a10:b600:1:0:c4be:94ff:feb1:8b3f",
"302:9e63:7d43:b742:469d:3ec2:ab15:f75e"
],
"port": 9900,
"namespace": "81-36155-examplezdb1"
},
{
"name": "examplezdb0",
"password": "password",
"public": false,
"size": 10,
"description": "",
"mode": "user",
"ips": [
"2a10:b600:1:0:c4be:94ff:feb1:8b3f",
"302:9e63:7d43:b742:469d:3ec2:ab15:f75e"
],
"port": 9900,
"namespace": "81-36155-examplezdb0"
}
],
"Vms": [],
"QSFS": [],
"NodeDeploymentID": {
"11": 36155
},
"ContractID": 36155,
"IPrange": ""
}
```
## Cancel
```bash
tfcmd cancel <zdb-project-name>
```
`zdb-project-name` is the name of the deployment specified while deploying with `tfcmd`.
Example:
```console
$ tfcmd cancel examplezdb
3:37PM INF canceling contracts for project examplezdb
3:37PM INF examplezdb canceled
```
<h1>TFROBOT</h1>
TFROBOT (`tfrobot`) is a command line interface tool for simultaneous mass deployment of groups of VMs on the ThreeFold Grid. It supports multiple retries for failed deployments and customizable configurations: you can define node groups, VM groups and other settings through a YAML or a JSON file.
Consult the [ThreeFoldTech TFROBOT repository](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/tfrobot) for the latest updates and read the [basics](../../system_administrators/getstarted/tfgrid3_getstarted.md) to get up to speed if needed.
<h2>Table of Contents</h2>
- [Installation](./tfrobot_installation.md)
- [Configuration File](./tfrobot_config.md)
- [Deployment](./tfrobot_deploy.md)
- [Commands and Flags](./tfrobot_commands_flags.md)
- [Supported Configurations](./tfrobot_configurations.md)
<h1> Commands and Flags </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Commands](#commands)
- [Subcommands](#subcommands)
- [Flags](#flags)
***
## Introduction
We present the various commands, subcommands and flags available with TFROBOT.
## Commands
You can run the command `tfrobot help` at any time to access the help section. This will also display the available commands.
| Command | Description |
| ---------- | ---------------------------------------------------------- |
| completion | Generate the autocompletion script for the specified shell |
| help | Help about any command |
| version | Get latest build tag |
Use `tfrobot [command] --help` for more information about a command.
## Subcommands
You can use subcommands to deploy and cancel workloads on the TFGrid.
- **deploy:** used to mass deploy groups of vms with specific configurations
```bash
tfrobot deploy -c path/to/your/config.yaml
```
- **cancel:** used to cancel all vms deployed using specific configurations
```bash
tfrobot cancel -c path/to/your/config.yaml
```
- **load:** used to load all vms deployed using specific configurations
```bash
tfrobot load -c path/to/your/config.yaml
```
## Flags
You can use different flags to configure your deployment.
| Flag | Usage |
| :---: | :---: |
| -c | used to specify path to configuration file |
| -o | used to specify path to output file to store the output info in |
| -d | allow debug logs to appear in the output logs |
| -h | help |
> **Note:** Make sure to use each flag only once. If a flag is repeated, all values but the last are ignored.
<h1> Configuration File</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Examples](#examples)
- [YAML Example](#yaml-example)
- [JSON Example](#json-example)
- [Create a Configuration File](#create-a-configuration-file)
***
## Introduction
To use TFROBOT, the user needs to create a YAML or a JSON configuration file that will contain the mass deployment information, such as the groups information, the number of VMs to deploy, the compute, storage and network resources needed, as well as the user's credentials, such as the SSH public key, the network (main, test, dev, qa) and the TFChain mnemonics.
## Examples
We present here a configuration file example that deploys 3 nodes with 2 vcores, 16 GB of RAM, 100 GB of SSD, 50 GB of HDD and an IPv4 address. The same deployment is shown with a YAML file and with a JSON file. Parsing is based on the file extension: TFROBOT uses the JSON format if the file has a JSON extension and the YAML format otherwise.
You can use this example for guidance, and make sure to replace placeholders and adapt the groups based on your actual project details. To the minimum, `ssh_key1` should be replaced by the user SSH public key and `example-mnemonic` should be replaced by the user mnemonics.
Note that if no IPs are specified as true (IPv4 or IPv6), an Yggdrasil IP address will automatically be assigned to the VM, as at least one IP should be set to allow an SSH connection to the VM.
### YAML Example
```yaml
node_groups:
  - name: group_a
    nodes_count: 3
    free_cpu: 2
    free_mru: 16
    free_ssd: 100
    free_hdd: 50
    dedicated: false
    public_ip4: true
    public_ip6: false
    certified: false
    region: europe
vms:
  - name: examplevm123
    vms_count: 5
    node_group: group_a
    cpu: 1
    mem: 0.25
    public_ip4: true
    public_ip6: false
    ssd:
      - size: 15
        mount_point: /mnt/ssd
    flist: https://hub.grid.tf/tf-official-apps/base:latest.flist
    entry_point: /sbin/zinit init
    root_size: 0
    ssh_key: example1
    env_vars:
      user: user1
      pwd: 1234
ssh_keys:
  example1: ssh_key1
mnemonic: example-mnemonic
network: dev
max_retries: 5
```
### JSON Example
```json
{
  "node_groups": [
    {
      "name": "group_a",
      "nodes_count": 3,
      "free_cpu": 2,
      "free_mru": 16,
      "free_ssd": 100,
      "free_hdd": 50,
      "dedicated": false,
      "public_ip4": true,
      "public_ip6": false,
      "certified": false,
      "region": "europe"
    }
  ],
  "vms": [
    {
      "name": "examplevm123",
      "vms_count": 5,
      "node_group": "group_a",
      "cpu": 1,
      "mem": 0.25,
      "public_ip4": true,
      "public_ip6": false,
      "ssd": [
        {
          "size": 15,
          "mount_point": "/mnt/ssd"
        }
      ],
      "flist": "https://hub.grid.tf/tf-official-apps/base:latest.flist",
      "entry_point": "/sbin/zinit init",
      "root_size": 0,
      "ssh_key": "example1",
      "env_vars": {
        "user": "user1",
        "pwd": "1234"
      }
    }
  ],
  "ssh_keys": {
    "example1": "ssh_key1"
  },
  "mnemonic": "example-mnemonic",
  "network": "dev",
  "max_retries": 5
}
```
## Create a Configuration File
You can start with the example above and adjust for your specific deployment needs.
- Create directory
```bash
mkdir tfrobot_deployments && cd $_
```
- Create configuration file and adjust with the provided example above
```bash
nano config.yaml
```
Once you've set your configuration file, all that's left is to deploy on the TFGrid. Read the next section for more information on how to deploy with TFROBOT.
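Before deploying, it can help to catch obvious mistakes early. The following pre-flight check is only a sketch, not a tfrobot feature: it verifies that the required top-level keys from the example above are present in `config.yaml`. Here a stub `config.yaml` is written first purely to keep the example self-contained; skip that line and run the loop against your real file.

```shell
# Stub config for illustration only -- use your own config.yaml in practice.
printf 'node_groups:\nvms:\nssh_keys:\nmnemonic: example-mnemonic\nnetwork: dev\nmax_retries: 5\n' > config.yaml

# Check that each required top-level key appears at the start of a line.
for key in node_groups vms ssh_keys mnemonic network; do
  grep -q "^${key}:" config.yaml && echo "found: ${key}" || echo "MISSING: ${key}"
done
```

Any `MISSING` line points at a key to add before running `tfrobot deploy`.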
<h1> Supported Configurations </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Config File](#config-file)
- [Node Group](#node-group)
- [Vms Groups](#vms-groups)
- [Disk](#disk)
***
## Introduction
When deploying with TFROBOT, you can set different configurations allowing for personalized deployments.
## Config File
| Field | Description| Supported Values|
| :---: | :---: | :---: |
| [node_group](#node-group) | description of all resources needed for each node_group | list of structs of type node_group |
| [vms](#vms-groups) | description of resources needed for deploying groups of vms belonging to a node_group | list of structs of type vms |
| ssh_keys | map of ssh keys with key=name and value=the actual ssh key | map of string to string |
| mnemonic | mnemonic of the user | should be valid mnemonic |
| network | valid network of ThreeFold Grid networks | main, test, qa, dev |
| max_retries | number of retries for failed node groups | positive integer |
## Node Group
| Field | Description| Supported Values|
| :---: | :---: | :---: |
| name | name of node_group | node group name should be unique |
| nodes_count | number of nodes in node group| nonzero positive integer |
| free_cpu | number of cpu of node | nonzero positive integer max = 32 |
| free_mru | free memory in the node in GB | min = 0.25, max = 256 |
| free_ssd | free ssd storage in the node in GB | positive integer value |
| free_hdd | free hdd storage in the node in GB | positive integer value |
| dedicated | are nodes dedicated | `true` or `false` |
| public_ip4 | should the nodes have free ip v4 | `true` or `false` |
| public_ip6 | should the nodes have free ip v6 | `true` or `false` |
| certified | should the nodes be certified (if false, the nodes could be certified or DIY) | `true` or `false` |
| region | region could be the name of the continents the nodes are located in | africa, americas, antarctic, antarctic ocean, asia, europe, oceania, polar |
## Vms Groups
| Field | Description| Supported Values|
| :---: | :---: | :---: |
| name | name of vm group | string value with no special characters |
| vms_count | number of vms in vm group| nonzero positive integer |
| node_group | name of node_group the vm belongs to | should be defined in node_groups |
| cpu | number of cpu for vm | nonzero positive integer max = 32 |
| mem | free memory in the vm in GB | min = 0.25, max 256 |
| planetary | should the vm have yggdrasil ip | `true` or `false` |
| public_ip4 | should the vm have free ip v4 | `true` or `false` |
| public_ip6 | should the vm have free ip v6 | `true` or `false` |
| flist | should be a link to valid flist | valid flist url with `.flist` or `.fl` extension |
| entry_point | entry point of the flist | path to the entry point in the flist |
| ssh_key | key of ssh key defined in the ssh_keys map | should be valid ssh_key defined in the ssh_keys map |
| env_vars | map of env vars | map of type string to string |
| ssd | list of disks | should be of type disk|
| root_size | root size in GB | 0 for default root size, max 10TB |
## Disk
| Field | Description| Supported Values|
| :---: | :---: | :---: |
| size | disk size in GB| positive integer min = 15 |
| mount_point | disk mount point | path to mountpoint |
<h1> Deployment </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Deploy Workloads](#deploy-workloads)
- [Delete Workloads](#delete-workloads)
- [Logs](#logs)
- [Using TFCMD with TFROBOT](#using-tfcmd-with-tfrobot)
- [Get Contracts](#get-contracts)
***
## Introduction
We present how to deploy workloads on the ThreeFold Grid using TFROBOT.
## Prerequisites
To deploy workloads on the TFGrid with TFROBOT, you first need to [install TFROBOT](./tfrobot_installation.md) on your machine and create a [configuration file](./tfrobot_config.md).
## Deploy Workloads
Once you've installed TFROBOT and created a configuration file, you can deploy on the TFGrid with the following command. Make sure to indicate the path to your configuration file.
```bash
tfrobot deploy -c ./config.yaml
```
## Delete Workloads
To delete the contracts, you can use the following line. Make sure to indicate the path to your configuration file.
```bash
tfrobot cancel -c ./config.yaml
```
## Logs
To ensure a complete log history, append `2>&1 | tee path/to/log/file` to the command being executed.
```bash
tfrobot deploy -c ./config.yaml 2>&1 | tee path/to/log/file
```
## Using TFCMD with TFROBOT
### Get Contracts
The TFCMD tool works well with TFROBOT, as it can be used to query the TFGrid, for example you can see the contracts created by TFROBOT by running the TFCMD command, taking into consideration that you are using the same mnemonics and are on the same network:
```bash
tfcmd get contracts
```
For more information on TFCMD, [read the documentation](../tfcmd/tfcmd.md).
<h1>Installation</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Installation](#installation)
***
## Introduction
This section covers the basics on how to install TFROBOT (`tfrobot`).
TFROBOT is available as binaries. Make sure to download the latest release and to stay up to date with new releases.
## Installation
To install TFROBOT, simply download and extract the TFROBOT binaries to your path.
- Create a new directory for `tfgrid-sdk-go`
```bash
mkdir tfgrid-sdk-go
cd tfgrid-sdk-go
```
- Download latest release from [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases)
- ```
wget https://github.com/threefoldtech/tfgrid-sdk-go/releases/download/v0.14.4/tfgrid-sdk-go_Linux_x86_64.tar.gz
```
- Extract the binaries
- ```
tar -xvf tfgrid-sdk-go_Linux_x86_64.tar.gz
```
- Move `tfrobot` to any `$PATH` directory:
```bash
mv tfrobot /usr/local/bin
```
<h1> 1. Create a Farm </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Create a TFChain Account](#create-a-tfchain-account)
- [Create a Farm](#create-a-farm)
- [Create a ThreeFold Connect Wallet](#create-a-threefold-connect-wallet)
- [Add a Stellar Address for Payout](#add-a-stellar-address-for-payout)
- [Farming Rewards Distribution](#farming-rewards-distribution)
- [More Information](#more-information)
***
## Introduction
We cover the basic steps to create a farm with the ThreeFold Dashboard. We also create a TFConnect app wallet to receive the farming rewards.
## Create a TFChain Account
We create a TFChain account using the ThreeFold Dashboard.
Go to the [ThreeFold Dashboard](https://dashboard.grid.tf/), click on **Create Account**, choose a password and click **Connect**.
![tfchain_create_account](./img/dashboard_tfchain_create_account.png)
Once your profile gets activated, you should find your Twin ID and Address generated under your Mnemonics for verification. Also, your Account Balance will be available at the top right corner under your profile name.
![tf_mnemonics](./img/dashboard_tf_mnemonics.png)
## Create a Farm
We create a farm using the dashboard.
In the left-side menu, select **Farms** -> **Your Farms**.
![your_farms](./img/dashboard_your_farms.png)
Click on **Create Farm**, choose a farm name and then click **Create**.
![create_farm](./img/dashboard_create_farm.png)
![farm_name](./img/dashboard_farm_name.png)
## Create a ThreeFold Connect Wallet
Your farming rewards should be sent to a Stellar wallet with a TFT trustline enabled. The simplest way to proceed is to create a TF Connect app wallet as the TFT trustline is enabled by default on this wallet. For more information on TF Connect, read [this section](../../threefold_token/storing_tft/tf_connect_app.md).
Let's create a TFConnect Wallet and take note of the wallet address. First, download the app.
This app is available for [Android](https://play.google.com/store/apps/details?id=org.jimber.threebotlogin&hl=en&gl=US) and [iOS](https://apps.apple.com/us/app/threefold-connect/id1459845885).
- Note that for Android phones, you need at minimum Android Nougat, the 8.0 software version.
- Note that for iOS phones, you need at minimum iOS 14.5. It will soon be available for iOS 13.
Open the app, click **SIGN UP**, choose a ThreeFold Connect Id, write your email address, take note of the seed phrase and choose a pin. Once this is done, you will have to verify your email address. Check your email inbox.
In the app menu, click on **Wallet** and then click on **Create Initial Wallet**.
To find your wallet address, click on the **circled i** icon at the bottom of the screen.
![dashboard_tfconnect_wallet_1](./img/dashboard_tfconnect_wallet_1.png)
Click on the button next to your Stellar address to copy the address.
![dashboard_tfconnect_wallet_2](./img/dashboard_tfconnect_wallet_2.png)
You will need the TF Connect wallet address for the next section.
> Note: Make sure to keep your TF Connect Id and seed phrase in a secure place offline. You will need these two components to recover your account if you lose access.
## Add a Stellar Address for Payout
In the **Your Farms** section of the dashboard, click on **Add/Edit Stellar Payout Address**.
![dashboard_walletaddress_1](./img/dashboard_walletaddress_1.png)
Paste your Stellar wallet address and click **Submit**.
![dashboard_walletaddress_2](./img/dashboard_walletaddress_2.png)
### Farming Rewards Distribution
Farming rewards will be sent to your farming wallet around the 8th of each month. This can vary depending on the situation. The minting is done automatically by code and verified by humans as a double check.
## More Information
For more information, such as setting IP addresses, you can consult the [Dashboard Farms section](../../dashboard/farms/farms.md).
<h1> 2. Create a Zero-OS Bootstrap Image </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Download the Zero-OS Bootstrap Image](#download-the-zero-os-bootstrap-image)
- [Burn the Zero-OS Bootstrap Image](#burn-the-zero-os-bootstrap-image)
- [CD/DVD BIOS](#cddvd-bios)
- [USB Key BIOS+UEFI](#usb-key-biosuefi)
- [BalenaEtcher (MAC, Linux, Windows)](#balenaetcher-mac-linux-windows)
- [CLI (Linux)](#cli-linux)
- [Rufus (Windows)](#rufus-windows)
- [Additional Information (Optional)](#additional-information-optional)
- [Expert Mode](#expert-mode)
- [Use a Specific Kernel](#use-a-specific-kernel)
- [Disable GPU](#disable-gpu)
- [Bootstrap Image URL](#bootstrap-image-url)
- [Zeros-OS Bootstrapping](#zeros-os-bootstrapping)
- [Zeros-OS Expert Bootstrap](#zeros-os-expert-bootstrap)
***
## Introduction
We will now learn how to create a Zero-OS bootstrap image in order to boot a DIY 3Node.
## Download the Zero-OS Bootstrap Image
Let's download the Zero-OS bootstrap image.
In the Farms section of the Dashboard, click on **Bootstrap Node Image**
![dashboard_bootstrap_farm](./img/dashboard_bootstrap_farm.png)
or use the direct link [https://v3.bootstrap.grid.tf](https://v3.bootstrap.grid.tf):
```
https://v3.bootstrap.grid.tf
```
![Farming_Create_Farm_21](./img/farming_createfarm_21.png)
This is the Zero-OS v3 Bootstrapping page.
![Farming_Create_Farm_22](./img/farming_createfarm_22.png)
Write your farm ID and choose production mode.
![Farming_Create_Farm_23](./img/farming_createfarm_23.png)
If your system is new, you might be able to run the bootstrap in UEFI mode.
![Farming_Create_Farm_24](./img/farming_createfarm_24.png)
For older systems, run the bootstrap in BIOS mode. For BIOS CD/DVD, choose **ISO**. For BIOS USB, choose **USB**
Download the bootstrap image. Next, we will burn the bootstrap image.
## Burn the Zero-OS Bootstrap Image
We show how to burn the Zero-OS bootstrap image. A quick and modern way is to burn the bootstrap image on a USB key.
### CD/DVD BIOS
For the BIOS **ISO** image, download the file and burn it on a DVD.
### USB Key BIOS+UEFI
There are many ways to burn the bootstrap image on a USB key. The easiest way that works for all operating systems is to use BalenaEtcher. We also provide other methods.
#### BalenaEtcher (MAC, Linux, Windows)
For **MAC**, **Linux** and **Windows**, you can use [BalenaEtcher](https://www.balena.io/etcher/) to load/flash the image on a USB stick. This program also formats the USB in the process. This will work for the option **EFI IMG** for UEFI boot, and with the option **USB** for BIOS boot. Simply follow the steps presented to you and make sure you select the bootstrap image file you downloaded previously.
> Note: There are alternatives to BalenaEtcher (e.g. [usbimager](https://gitlab.com/bztsrc/usbimager/)).
**General Steps with BalenaEtcher:**
1. Download BalenaEtcher
2. Open BalenaEtcher
3. Select **Flash from file**
4. Find and select the bootstrap image (with your correct farm ID)
5. Select **Target** (your USB key)
6. Select **Flash**
That's it. Now you have a bootstrap image on Zero-OS as a bootable removable media device.
#### CLI (Linux)
For the BIOS **USB** and the UEFI **EFI IMG** images, you can do the following on Linux:

```bash
sudo dd status=progress if=FILELOCATION.ISO of=/dev/sd*
```

Here the `*` indicates that you must adjust the target according to your disk (use the `.IMG` file instead of the `.ISO` for the EFI image). To see your disks, run `lsblk` in the command window. Make sure you select the proper disk!

> Note: If your USB key is not new, make sure to format it before burning the Zero-OS image.
#### Rufus (Windows)
For Windows, if you are using the "dd" able image, instead of writing command line, you can use the free USB flashing program called [Rufus](https://sourceforge.net/projects/rufus.mirror/) and it will automatically do this without needing to use the command line. Rufus also formats the boot media in the process.
## Additional Information (Optional)
We cover some additional information. Note that the following information is not needed for a basic farm setup.
### Expert Mode
You can use the [expert mode](https://v3.bootstrap.grid.tf/expert) to generate specific Zero-OS bootstrap images.
Alongside the basic options of the normal bootstrap mode, the expert mode allows farmers to add extra kernel arguments and to choose which kernel to use from a vast list of Zero-OS kernels.
#### Use a Specific Kernel
You can use the expert mode to choose a specific kernel. Simply set the information you normally use and then select the proper kernel you need in the **Kernel** drop-down list.
![](./img/bootstrap_kernel_list.png)
#### Disable GPU
You can use the expert mode to disable GPU on your 3Node.
![](./img/bootstrap_disable-gpu.png)
In the expert mode of the Zero-OS Bootstrap generator, fill in the following information:
- Farmer ID
- Your current farm ID
- Network
- The network of your farm
- Extra kernel arguments
- ```
disable-gpu
```
- Kernel
- Leave the default kernel
- Format
- Choose a bootstrap image format
- Click on **Generate**
- Click on **Download**
### Bootstrap Image URL
In both normal and expert mode, you can use the generated URL to quickly download a Zero-OS bootstrap image based on your farm specific setup.
Using URLs can be a very quick and efficient way to create new bootstrap images once you're familiar with the Zero-OS bootstrap URL template and some potential variations.
```
https://<grid_version>.bootstrap.grid.tf/<image_format>/<network>/<farm_ID>/<arg1>/<arg2>/.../<kernel>
```
Note that the arguments and the kernel are optional.
The following content will provide some examples.
#### Zeros-OS Bootstrapping
On the [main page](https://v3.bootstrap.grid.tf/), once you've written your farm ID and selected a network, you can copy the generated URL of any given image format.
For example, the following URL is a download link to an **EFI IMG** of the Zero-OS bootstrap image of farm 1 on the main TFGrid v3 network:
```
https://v3.bootstrap.grid.tf/uefimg/prod/1
```
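Since the URL is just the template filled in left to right, it is easy to build in a shell script. This sketch reproduces the farm 1 mainnet **EFI IMG** link shown above; swap in your own farm ID, network and image format.

```shell
# Build a Zero-OS bootstrap download URL from its parts
# (template: https://<grid_version>.bootstrap.grid.tf/<image_format>/<network>/<farm_ID>)
farm_id=1
network=prod
image_format=uefimg
echo "https://v3.bootstrap.grid.tf/${image_format}/${network}/${farm_id}"
# prints https://v3.bootstrap.grid.tf/uefimg/prod/1
```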
#### Zeros-OS Expert Bootstrap
You can use the generated sublink at the **Generate step** of the expert mode to get a quick URL to download your bootstrap image.
- After setting the parameters and arguments, click on **Generate**
- Add the **Target** content to the following URL `https://v3.bootstrap.grid.tf`
- For example, the following URL sets an **ipxe** script of the Zero-OS bootstrap of farm 1 on the main TFGrid v3 network, with the **disable-gpu** function enabled as an extra kernel argument and a specific kernel:
- ```
https://v3.bootstrap.grid.tf/ipxe/test/1/disable-gpu/zero-os-development-zos-v3-generic-b8706d390d.efi
```
<h1> 3. Set the Hardware </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Hardware Requirements](#hardware-requirements)
- [3Node Requirements Summary](#3node-requirements-summary)
- [Bandwidth Requirements](#bandwidth-requirements)
- [Link to Share Farming Setup](#link-to-share-farming-setup)
- [Powering the 3Node](#powering-the-3node)
- [Surge Protector](#surge-protector)
- [Power Distribution Unit (PDU)](#power-distribution-unit-pdu)
- [Uninterrupted Power Supply (UPS)](#uninterrupted-power-supply-ups)
- [Generator](#generator)
- [Connecting the 3Node to the Internet](#connecting-the-3node-to-the-internet)
- [Z-OS and Switches](#z-os-and-switches)
- [Using Onboard Storage (3Node Servers)](#using-onboard-storage-3node-servers)
- [Upgrading a DIY 3Node](#upgrading-a-diy-3node)
***
## Introduction
In this section of the ThreeFold Farmers book, we cover the essential farming requirements when it comes to ThreeFold 3Node hardware.
The essential information are available in the section [3Node Requirements Summary](#3node-requirements-summary).
## Hardware Requirements
You need a theoretical minimum of 500 GB of SSD and 2 GB of RAM on a mini PC, desktop or server. For peak optimization, aim for 100 GB of SSD and 8 GB of RAM per thread (a thread is equivalent to a virtual core or logical core).
Also, TFDAO might implement a farming parameter based on [passmark](https://www.cpubenchmark.net/cpu_list.php). From the ongoing discussion on the Forum, you should aim at a CPU mark of 1000 and above per core.
> 3Node optimal farming hardware ratio -> 100 GB of SSD + 8 GB of RAM per Virtual Core
Note that you can run Zero-OS on a Virtual Machine (VM), but you won't farm any TFT from this process. To farm TFT, Zero-OS needs to be on bare metal.
Also, note that ThreeFold runs its own OS, which is Zero-OS. You thus need to start with completely wiped disks. You cannot farm TFT with Windows, Linux or MAC OS installed on your disks. If you need to use such OS temporarily, boot it in Try mode with a removable media (USB key).
Note: Once you have the necessary hardware, you need to [create a farm](./1_create_farm.md), [create a Zero-OS bootstrap image](./2_bootstrap_image.md), [wipe your disks](./4_wipe_all_disks.md) and [set the BIOS/UEFI](./5_set_bios_uefi.md) . Then you can [boot your 3Node](./6_boot_3node.md). If you are planning in building a farm in data center, [read this section](../advanced_networking/advanced_networking_toc.md).
### 3Node Requirements Summary
Any computer with the following specifications can be used as a DIY 3Node.
- Any 64-bit hardware with an Intel or AMD processor chip.
- Servers, desktops and mini computers type hardware are compatible.
- A minimum of 500 GB of SSD and a bare minimum of 2 GB of RAM is required.
- A ratio of 100GB of SSD and 8GB of RAM per thread is recommended.
- A wired ethernet connection is highly recommended to maximize reliability and the ability to farm TFT.
- A [passmark](https://www.passmark.com/) of 1000 per core is recommended and will probably be a minimum requirement in the future.
A passmark of 1000 per core is recommended and will probably become a minimum requirement in the future. This is not yet an official requirement: a 3Node with less than 1000 passmark per core of CPU would not be penalized if it is registered before the DAO settles the [Passmark Question](https://forum.threefold.io/t/cpu-benchmarking-for-reward-calculations/2479).
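The 100 GB of SSD + 8 GB of RAM per thread ratio can be turned into a quick calculation: the number of threads your resources fully back is the smallest of the three quantities. The machine specs below are purely hypothetical numbers for illustration.

```shell
# How many threads does this machine's SSD and RAM fully back at the
# recommended ratio (100 GB SSD and 8 GB RAM per thread)?
awk 'BEGIN {
  ssd_gb = 2000; ram_gb = 128; threads = 24   # hypothetical machine
  by_ssd = ssd_gb / 100    # threads the SSD can back
  by_ram = ram_gb / 8      # threads the RAM can back
  usable = threads
  if (by_ssd < usable) usable = by_ssd
  if (by_ram < usable) usable = by_ram
  printf "threads fully backed by resources: %d\n", usable
}'
# prints: threads fully backed by resources: 16
```

Here the 128 GB of RAM is the bottleneck (16 threads' worth), so adding RAM would be the first upgrade to consider.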
## Bandwidth Requirements
<!---
This section should be checked and validated with the TF Team. We can change the constant (here it's 10) if needed. Or use another equation if this one is deemed suboptimal. This equation is an attempt at a synthesis of the discussions we had on the TF Forum.
-->
A 3Node connects to the ThreeFold Grid and transfers information, whether it is in the form of compute, storage or network units (CU, SU, NU respectively). The more resources your 3Nodes offer to the Grid, the more bandwidth will be needed to transfer the additional information. In this section, we cover general guidelines to make sure you have enough bandwidth on the ThreeFold Grid when utilization will be happening.
Note that the TFDAO will need to discuss and settle on clearer guidelines in the near future. For now, we propose those general guidelines. Being aware of these numbers as you build and scale your ThreeFold farm will set you in the proper direction.
> **The strict minimum for one Titan is 1 mbps of bandwidth**.
If you want to expand your ThreeFold farm, you should check the following to make sure your bandwidth will be sufficient when there will be Grid utilization.
**Bandwidth per 3Node Equation**
> min Bandwidth per 3Node (mbps) = 10 * max((Total SSD TB / 1 TB),(Total Threads / 8 Threads),(Total GB / 64 GB)) + 10 * (Total HDD TB / 2)
This equation means that for each TB of HDD you need 5 mbps of bandwidth, and that you need 10 mbps for each TB of SSD, each 8 threads or each 64 GB of RAM, whichever of these three ratios is highest.
This means a proper bandwidth for a Titan would be 10 mbps. As stated, 1 mbps is the strict minimum for one Titan.
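If you want to plug your own numbers into the equation above, here is a minimal sketch using `awk`. The SSD, thread, RAM and HDD values are hypothetical examples; replace them with your 3Node's totals:

```
# Minimum recommended bandwidth (mbps), following the equation above.
awk 'BEGIN {
  ssd_tb = 2; threads = 24; ram_gb = 128; hdd_tb = 4   # example values
  m = ssd_tb / 1                          # SSD ratio (TB per 1 TB)
  if (threads / 8 > m) m = threads / 8    # threads per 8 threads
  if (ram_gb / 64 > m) m = ram_gb / 64    # RAM GB per 64 GB
  printf "min bandwidth: %g mbps\n", 10 * m + 10 * (hdd_tb / 2)
}'
```

With these example values the highest ratio is the thread count (24 / 8 = 3), giving 30 + 20 = 50 mbps.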
## Link to Share Farming Setup
If you want ideas and suggestions when it comes to building DIY 3Nodes, a good place to start is by checking what other farmers have built. [This post on the Forum](https://forum.threefold.io/t/lets-share-our-farming-setup/286) is a great start. The following section also contains great DIY 3Node ideas.
## Powering the 3Node
### Surge Protector
A surge protector is highly recommended for your farm and your 3Nodes. This ensures your 3Nodes will not overcharge if a power surge happens. Whole-house surge protectors are also an option.
### Power Distribution Unit (PDU)
A PDU (power distribution unit) is useful in big server settings in order to manage your wattage and keep track of your power consumption.
### Uninterrupted Power Supply (UPS)
A UPS (uninterrupted power supply) is great for a 3Node if your power goes on and off frequently for short periods of time. This ensures your 3Node does not need to constantly reboot. If your electricity provider is very reliable, a UPS might not be needed, as the small downtime resulting from rare power outages will not exceed the DIY downtime limit (95% uptime, i.e. 5% downtime = 36 hours per month). Of course, for a better Grid utilization experience, adding a UPS to your ThreeFold farm can be highly beneficial.
Note: Make sure to have AC Power Recovery set properly so your 3Node goes back online if power shuts down momentarily. UPSs are generally used in data centers to give people enough time to do a "graceful" shutdown of the units when power goes off. 3Nodes do not need graceful shutdowns, as Zero-OS cannot lose data while functioning. The only way to power down a 3Node is simply to turn it off directly on the machine.
### Generator
A generator may be needed for very large installations, whether or not the main power supply is steady.
## Connecting the 3Node to the Internet
As a general consideration, to connect a 3Node to the Internet, you must use an Ethernet cable and set DHCP as a network management protocol. Note that WiFi is not supported with ThreeFold farming.
The general route from the 3Node to the Internet is the following:
> 3Node -> Switch (optional) -> Router -> Modem
Note that most home routers come with a built-in switch to provide multiple Ethernet ports. Using a stand-alone switch is optional, but can come quite handy when farmers have many 3Nodes.
### Z-OS and Switches
Switches can be managed or unmanaged. Managed switches come with managed features made available to the user (typically more of such features on premium models).
Z-OS can work with both types of switches. As long as there's a router reachable on the other end offering DHCP and a route to the public internet, it's not important what's in between. Generally speaking, switches are more like cables, just part of the pipes that connect devices in a network.
We present a general overview of the two types of switches.
**Unmanaged Switches**
Unmanaged switches are the most common type, and if someone just says "switch" this is probably what they mean. These switches simply forward traffic to its destination in a plug-and-play manner, with no configuration. When a switch is connected to a router, you can think of the additional free ports on the switch as essentially extra ports on the router. It's a way to expand the available ports and sometimes avoid running multiple long cables. For example, if your nodes are far from your router, you can run a single long ethernet cable to a switch next to the nodes and then use multiple shorter cables to connect from the switch to the nodes.
**Managed Switches**
Managed switches have more capabilities than unmanaged switches and they are not very common in home settings (at least not as standalone units). Some of our farmers do use managed switches. These switches offer much more control and also require configuration. They can enable advanced features like virtual LANs to segment the network.
## Using Onboard Storage (3Node Servers)
If your 3Node is based on a server, you can either use PCIe slots and a PCIe-NVMe adapter to install an NVMe SSD, or you can use the onboard storage.
Usually, servers use RAID technology for onboard storage. RAID is a technology that has brought resilience and security to the IT industry. But it has some limitations that ThreeFold did not want to get stuck with. ThreeFold developed a different and more efficient way to [store data reliably](https://library.threefold.me/info/threefold#/cloud/threefold__cloud_products?id=storage-quantum-safe-filesystem). This Quantum Safe Storage overcomes some of the shortfalls of RAID and is able to work over multiple nodes geographically spread on the TF Grid. This means that there is no RAID controller in between data storage and the TF Grid.
For your 3Nodes, you want to bypass RAID so that Zero-OS has bare metal access to the disks.
To use onboard storage on a server without RAID, you can:
1. [Re-flash](https://fohdeesha.com/docs/perc.html) the RAID card
2. Turn on HBA/non-RAID mode
3. Install a different card.
For HP servers, you simply turn on the HBA mode (Host Bus Adapter).
For Dell servers, you can either cross-flash or [re-flash](https://fohdeesha.com/docs/perc.html) the RAID controller with an IT-mode firmware (see this [video](https://www.youtube.com/watch?v=h5nb09VksYw)), or get a Dell H310 controller (which has a non-RAID option). Otherwise, you can install an NVMe SSD with a PCIe adapter and turn off the RAID controller.
Once the disks are wiped, you can shut down your 3Node and remove the Linux bootstrap image (USB key). Usually, there will be a message telling you when to do so.
## Upgrading a DIY 3Node
As we've seen in the [List of Common DIY 3Nodes](#list-of-common-diy-3nodes), it is sometimes necessary, and often useful, to upgrade your hardware.
**Type of upgrades possible**
- Add TBs of SSD/HDD
- Add RAM
- Change CPU
- Change BIOS battery
- Change fans
For some DIY 3Nodes, no upgrades are required; this is a good start if you want to explore DIY building without too many additional steps.
For in-depth videos on how to upgrade mini PCs and rack servers, watch these great [DIY videos](https://www.youtube.com/user/floridanelson).
<h1> Building a DIY 3Node </h1>
This section of the ThreeFold Farmers book presents the necessary and basic steps to build a DIY 3Node.
For advanced farming information, such as GPU farming and room parameters, refer to the section [Farming Optimization](../farming_optimization/farming_optimization.md).
<h2> Table of Contents </h2>
- [1. Create a Farm](./1_create_farm.md)
- [2. Create a Zero-OS Bootstrap Image](./2_bootstrap_image.md)
- [3. Set the Hardware](./3_set_hardware.md)
- [4. Wipe All the Disks](./4_wipe_all_disks.md)
- [5. Set the BIOS/UEFI](./5_set_bios_uefi.md)
- [6. Boot the 3Node](./6_boot_3node.md)
<h1> 4. Wipe All the Disks </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Main Steps](#main-steps)
- [1. Create a Linux Bootstrap Image](#1-create-a-linux-bootstrap-image)
- [2. Boot Linux in *Try Mode*](#2-boot-linux-in-try-mode)
- [3. Use wipefs to Wipe All the Disks](#3-use-wipefs-to-wipe-all-the-disks)
- [Troubleshooting](#troubleshooting)
***
## Introduction
In this section of the ThreeFold Farmers book, we explain how to wipe all the disks of your 3Node.
## Main Steps
It only takes a few steps to wipe all the disks of a 3Node.
1. Create a Linux Bootstrap Image
2. Boot Linux in *Try Mode*
3. Wipe All the Disks
ThreeFold runs its own OS, which is Zero-OS. You thus need to start with completely wiped disks. Note that ALL disks must be wiped. Otherwise, Zero-OS won't boot.
An easy method is to simply download a Linux distribution and wipe the disk with the proper command line in the Terminal.
We will show how to do this with Ubuntu 20.04 LTS. This distribution is easy to use and is thus a good introduction to Linux, in case you haven't yet explored this great operating system.
## 1. Create a Linux Bootstrap Image
Download the Ubuntu 20.04 ISO file [here](https://releases.ubuntu.com/20.04/) and burn the ISO image on a USB key. Make sure you have enough space on your USB key. You can also use another Linux distro, such as [GRML](https://grml.org/download/), if you want a lighter ISO image.
The process here is the same as in the section [Burning the Bootstrap Image](./2_bootstrap_image.md#burn-the-zero-os-bootstrap-image), but with the Linux ISO instead of the Zero-OS ISO. [BalenaEtcher](https://www.balena.io/etcher/) is recommended, as it formats your USB key in the process, and it is available for macOS, Windows and Linux.
## 2. Boot Linux in *Try Mode*
When you boot the Linux ISO image, make sure to choose *Try Mode*. Otherwise, it will install Linux on your computer. You do not want this.
## 3. Use wipefs to Wipe All the Disks
When you use wipefs, you are removing all the data on your disk. Make sure you have no important data on your disks, or make sure you have copies of your disks before doing this operation, if needed.
Once Linux is booted, go into the terminal and write the following command lines.
First, you can check the available disks by writing in a terminal or in a shell:
```
lsblk
```
To see what disks are connected, write this command:
```
fdisk -l
```
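As a sketch, `lsblk` can also list only the whole disks (no partitions), which makes it easier to spot every device you will need to wipe:

```
# -d lists whole disks only (no partitions); -o selects the columns to show.
lsblk -d -o NAME,SIZE,TYPE,MODEL
```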
If you want to wipe one specific disk, here we use *sda* as an example, write this command:
```
sudo wipefs -a /dev/sda
```
Replace the "a" in *sda* with the letter of your disk, as shown in the *lsblk* output. The *sudo* prefix gives you the permissions required for this operation.
To wipe all the disks in your 3Node, write the command:
```
for i in /dev/sd?; do sudo wipefs -a $i; done
```
If `fdisk` lists entries that look like `/dev/nvme`, you'll need to adjust the commands accordingly.
For an NVMe disk, here using *nvme0n1* as an example, write:
```
sudo wipefs -a /dev/nvme0n1
```
Replace the "0" in nvme0n1 with the number corresponding to your disk, as shown in the *lsblk* output.
To wipe all the nvme disks, write this command line:
```
for i in /dev/nvme?n1; do sudo wipefs -a $i; done
```
## Troubleshooting
If you're having issues wiping the disks, you might need to use **--force** or **-f** with wipefs (e.g. **sudo wipefs -af /dev/sda**).
If you're having trouble getting your disks recognized by Zero-OS, some farmers have had success enabling AHCI mode for SATA in their BIOS.
If you are using a server with onboard storage, you might need to [re-flash the RAID card](../../faq/faq.md#is-there-a-way-to-bypass-raid-in-order-for-zero-os-to-have-bare-metals-on-the-system-no-raid-controller-in-between-storage-and-the-grid).
<h1> 5. Set the BIOS/UEFI </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Z-OS and DHCP](#z-os-and-dhcp)
- [Regular Computer and 3Node Network Differences](#regular-computer-and-3node-network-differences)
- [Static IP Addresses](#static-ip-addresses)
- [The Essential Features of BIOS/UEFI for a 3Node](#the-essential-features-of-biosuefi-for-a-3node)
- [Setting the Remote Management of a Server with a Static IP Address (Optional)](#setting-the-remote-management-of-a-server-with-a-static-ip-address-optional)
- [Update the BIOS/UEFI firmware (Optional)](#update-the-biosuefi-firmware-optional)
- [Check the BIOS/UEFI version on Windows](#check-the-biosuefi-version-on-windows)
- [Check the BIOS/UEFI version on Linux](#check-the-biosuefi-version-on-linux)
- [Update the BIOS firmware](#update-the-bios-firmware)
- [Additional Information](#additional-information)
- [BIOS/UEFI and Zero-OS Bootstrap Image Combinations](#biosuefi-and-zero-os-bootstrap-image-combinations)
- [Troubleshoot](#troubleshoot)
***
## Introduction
In this section of the ThreeFold Farmers book, we explain how to properly set the BIOS/UEFI of your 3Node.
Note that the BIOS mode is usually needed for older hardware, while the UEFI mode is usually needed for newer hardware, when it comes to properly booting Zero-OS on your DIY 3Node.
If in doubt, start with UEFI, and if it doesn't work as expected, try BIOS.
Before diving into the BIOS/UEFI settings, we will present some general considerations on Z-OS and DHCP.
## Z-OS and DHCP
The operating system running on the 3Nodes is called Zero-OS (Z-OS). When it comes to setting the proper network for your 3Node farm, you must use DHCP since Z-OS is going to request an IP from the DHCP server if there's one present, and it won't get network connectivity if there's no DHCP.
The Z-OS philosophy is to minimize configuration wherever possible, so there's nowhere to supply a static config when setting your 3Node network. Instead, the farmer is expected to provide DHCP.
While it is possible to set fixed IP addresses with the DHCP for the 3Nodes, it is recommended to avoid this and just set the DHCP normally without fixed IP addresses.
By setting DHCP in BIOS/UEFI, an IP address is automatically assigned by your router to your 3Node every time you boot it.
### Regular Computer and 3Node Network Differences
For a regular computer (not a 3Node), if you want to use a static IP in a network with DHCP, you'd first turn off DHCP and then set the static IP to an IP address outside the DHCP range. That being said, with Z-OS, there's no option to turn off DHCP and there's nowhere to set a static IP, besides public config and remote management. In brief, the farmer must provide DHCP, either on a private or a public range, for the 3Node to boot.
### Static IP Addresses
In the ThreeFold ecosystem, there are only two situations where you would work with static IP addresses: to set a public config to a 3Node or a farm, and to remotely manage your 3Nodes.
**Static IP and Public Config**
You can [set a static IP for the public config of a 3Node or a farm](./1_create_farm.md#optional-add-public-ip-addresses). In this case, the 3Node takes information from TF Chain and uses it to set a static configuration on a NIC (or on a virtual NIC in the case of single-NIC systems).
**Static IP and Remote Management**
You can [set a static IP address to remotely manage a 3Node](#setting-the-remote-management-of-a-server-with-a-static-ip-address-optional).
## The Essential Features of BIOS/UEFI for a 3Node
There are certain things that you should make sure are set properly on your 3Node.
As a general advice, you can Load Defaults (Settings) on your BIOS, then make sure the options below are set properly.
* Choose the correct combination of BIOS/UEFI and bootstrap image on [https://bootstrap.grid.tf/](https://bootstrap.grid.tf/)
* Newer systems will use UEFI
* Older systems will use BIOS
* Hint: If your 3Node boot stops at *Initializing Network Devices*, try the other method (BIOS or UEFI)
* Set Multi-Processor and Hyperthreading at Enabled
* Sometimes, it will be written Virtual Cores, or Logical Cores.
* Set Virtualization at Enabled
* On Intel, it is denoted as CPU virtualization and on ASUS, it is denoted as SVM.
* Make sure virtualization is enabled and look for the precise terms in your specific BIOS/UEFI.
* Set AC Recovery at Last Power State
* This will make sure your 3Node restarts after losing power momentarily.
* Select the proper Boot Sequence for the 3Node to boot Zero-OS from your bootstrap image
* e.g., if you have a USB key as a bootstrap image, select it in Boot Sequence
* Set Server Lookup Method (or the equivalent) at DNS. Only use Static IP if you know what you are doing.
* Your router will assign a dynamic IP address to your 3Node when it connects to Internet.
* Set Client Address Method (or the equivalent) at DHCP. Only use Static IP if you know what you are doing.
* Your router will assign a dynamic IP address to your 3Node when it connects to Internet.
* Secure Boot should be left at disabled
* Enable it if you know what you are doing. Otherwise, it can be set at disabled.
## Setting the Remote Management of a Server with a Static IP Address (Optional)
Note from the list above that by enabling the DHCP and DNS in BIOS, dynamic IP addresses will be assigned to 3Nodes. This way, you do not need any specific port configuration when booting a 3Node.
As long as the 3Node is connected to the Internet via an ethernet cable (WiFi is not supported), Zero-OS will be able to boot. By setting DHCP in BIOS, an IP address is automatically assigned to your 3Node every time you boot it. This section concerns 3Node servers with remote management functions and interfaces.
You can set up a node through static routing at the router without DHCP, by assigning the MAC address of the NIC to an IP address within your private subnet. This will give a static IP address to your 3Node.
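For illustration only, if the DHCP server on your network happens to be dnsmasq, such a MAC-to-IP reservation is a one-line entry. The MAC address, IP and hostname below are placeholders:

```
# Hypothetical dnsmasq reservation: the NIC with this MAC address always
# receives the same private IP, so the server's remote management interface
# stays reachable at a known address.
dhcp-host=aa:bb:cc:dd:ee:ff,192.168.1.50,3node-mgmt
```

The exact mechanism depends on your router's firmware; most consumer routers expose the same idea as "DHCP reservation" or "static lease" in their web UI.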
With a static IP address, you can then configure remote management on servers. For Dell, [iDRAC](https://www.dell.com/support/kbdoc/en-us/000134243/how-to-setup-and-manage-your-idrac-or-cmc-for-dell-poweredge-servers-and-blades) is used, and for HP, [ILO](https://support.hpe.com/hpesc/public/docDisplay?docId=a00045463en_us&docLocale=en_US) is used.
## Update the BIOS/UEFI firmware (Optional)
Updating the BIOS firmware is not always necessary, but doing so can help prevent future errors and troubleshooting. Making sure the date and time are set correctly can also help the booting process.
Note: updating the BIOS/UEFI firmware is optional, but recommended.
### Check the BIOS/UEFI version on Windows
Hit *Start*, type in *cmd* in the search box and click on *Command Prompt*. Write the line
> wmic bios get smbiosbiosversion
This will give you the BIOS or UEFI firmware of your PC.
### Check the BIOS/UEFI version on Linux
Simply type the following command
> sudo dmidecode | less
or this line:
> sudo dmidecode -s bios-version
### Update the BIOS firmware
1. On the manufacturer's website, download the latest BIOS/UEFI firmware
2. Put the file on a USB flash drive (+unzip if necessary)
3. Restart your hardware and enter the BIOS/UEFI settings
4. Navigate the menus to update the BIOS/UEFI
## Additional Information
### BIOS/UEFI and Zero-OS Bootstrap Image Combinations
To properly boot the Zero-OS image, you can use either an image made for a BIOS system or one made for a UEFI system, depending on your hardware.
BIOS is older technology. It means *Basic Input/Output System*.
UEFI is newer technology. It means *Unified Extensible Firmware Interface*. BIOS/UEFI is, in a way, the link between the hardware and the software of your computer.
In general, setting up a 3Node is similar whether it is a BIOS or a UEFI system. The important thing is to choose the correct combination of boot media and boot mode (BIOS/UEFI).
The bootstrap images are available [here](https://bootstrap.grid.tf/).
The choices are:
1. EFI IMG - UEFI
2. EFI FILE - UEFI
3. iPXE - Boot from network
4. ISO - BIOS
5. USB - BIOS
6. LKRN - Boot from network
Choices 1 and 2 are for UEFI (newer models).
Choices 4 and 5 are for BIOS (older models).
Choices 3 and 6 are mainly for network boot.
Refer to [this previous section](./2_bootstrap_image.md) for more information on creating a Zero-OS bootstrap image.
For information on how to boot Zero-OS with iPXE, read [this section](./6_boot_3node.md#advanced-booting-methods-optional).
### Troubleshoot
You might have to try UEFI first and, if it doesn't work, try BIOS. Usually when UEFI doesn't work with your current computer, the following message will be shown:
> Initializing Network Devices...
And then... nothing. This means that you are still in the hardware's BIOS and the boot has not even started yet. When this happens, try the BIOS mode of your computer.
<h1> 6. Boot the 3Node </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [1. Booting the 3Node with Zero-OS](#1-booting-the-3node-with-zero-os)
- [2. Check the 3Node Status Online](#2-check-the-3node-status-online)
- [3. Receive the Farming Rewards](#3-receive-the-farming-rewards)
- [Advanced Booting Methods (Optional)](#advanced-booting-methods-optional)
- [PXE Booting with OPNsense](#pxe-booting-with-opnsense)
- [PXE Booting with pfSense](#pxe-booting-with-pfsense)
- [Booting Issues](#booting-issues)
- [Multiple nodes can run with the same node ID](#multiple-nodes-can-run-with-the-same-node-id)
***
## Introduction
We explain how to boot the 3Node with the Zero-OS bootstrap image with a USB key. We also include optional advanced booting methods using OPNSense and pfSense.
One of the great features of Zero-OS is that it can be completely run within the cache of your 3Node. Indeed, the booting device that contains your farm ID will connect to the ThreeFold Grid and download everything needed to run smoothly. There are many benefits in terms of security and protection of data that comes with this.
## 1. Booting the 3Node with Zero-OS
To boot Zero-OS, insert your Zero-OS bootstrap image USB key, power on your computer and choose the right booting sequence and parameters ([BIOS or UEFI](./5_set_bios_uefi.md)) in your BIOS/UEFI settings. Then, restart the 3Node. Zero-OS should boot automatically.
Note that you need an ethernet cable connected to your router or switch. You cannot farm on the ThreeFold Grid with Wifi.
The first time you boot a 3Node, the screen will display: "This node is not registered (farmer: NameOfFarm)". This is normal. The Grid will create a node ID and you will be able to see it on screen. This can take a couple of minutes.
If time passes (an hour or more) and the node does not get registered, [wiping the disks](./4_wipe_all_disks.md) again and rebooting usually resolves the issue.
Once you have your node ID, you can also go on the ThreeFold Dashboard to see your 3Node and verify that your 3Node is online.
## 2. Check the 3Node Status Online
You can use the ThreeFold [Node Finder](../../dashboard/deploy/node_finder.md) to verify that your 3Node is online.
* [ThreeFold Main Net Dashboard](https://dashboard.grid.tf/)
* [ThreeFold Test Net Dashboard](https://dashboard.test.grid.tf/)
* [ThreeFold Dev Net Dashboard](https://dashboard.dev.grid.tf/)
* [ThreeFold QA Net Dashboard](https://dashboard.qa.grid.tf/)
## 3. Receive the Farming Rewards
The farming reward will be sent once per month at the address you gave when you set up your farm. You can review this process [here](./1_create_farm.md#add-a-stellar-address-for-payout).
That's it. You've now completed the necessary steps to build a DIY 3Node and to connect it to the Grid.
## Advanced Booting Methods (Optional)
### PXE Booting with OPNsense
> This documentation comes from the [amazing Network Booting Guide](https://forum.ThreeFold.io/t/network-booting-tutorial/2688) by @Fnelson on the ThreeFold Forum.
Network booting replaces your standard boot USB with a local server. This TFTP server delivers the boot files to your 3Nodes. This can be useful in bigger home farms, but is all but mandatory in a data center setup.
Network boot setup is quite easy and is centered on configuring a TFTP server. There are essentially two options for this: a small dedicated server, such as a Raspberry Pi, or piggybacking on your pfSense or OPNsense router. The latter is recommended, as it eliminates another piece of equipment and is probably more reliable.
**Setting Up Your Router to Allow Network Booting**
These steps are for OPNsense; pfSense may differ. These steps are required regardless of where you host your TFTP server.
> Services>DHCPv4>LAN>Network Booting
Check “Enable Network Booting”
Enter the IP address of your TFTP server under "Set next-server IP". This may be the router's IP or that of whatever device you are booting from.
Enter "pxelinux.0" under "Set default bios filename".
Ignore the TFTP Server settings.
**TFTP server setup on a Debian-based machine such as Ubuntu or Raspberry Pi**
> apt-get update
>
> apt-get install tftpd-hpa
>
> cd /srv/tftp/
>
> wget http://ftp.nl.debian.org/debian/dists/buster/main/installer-amd64/current/images/netboot/netboot.tar.gz
>
> wget http://ftp.nl.debian.org/debian/dists/buster/main/installer-amd64/current/images/netboot/pxelinux.0
>
> wget https://bootstrap.grid.tf/krn/prod/<FARMID> --no-check-certificate
>
> mv <FARMID> ipxe-prod.lkrn
>
> tar -xvzf netboot.tar.gz
>
> rm version.info netboot.tar.gz
>
> rm pxelinux.cfg/default
>
> chmod 777 /srv/tftp/pxelinux.cfg (optional if next step fails)
>
> echo 'default ipxe-prod.lkrn' >> pxelinux.cfg/default
**TFTP Server on an OPNsense router**
> Note: When using PFsense instead of OPNsense, steps are probably similar, but the directory or other small things may differ.
The first step is to download the TFTP server plugin. Go to System>Firmware>Status, check for updates, and follow the prompts to install. Then click the Plugins tab, search for tftp, and install os-tftp. Once that is installed, go to Services>TFTP (you may need to refresh the page). Check the Enable box and input your router IP (normally 192.168.1.1). Click save.
Turn on SSH for your router. In OPNsense it is System>Settings>Administration. Check Enable, root login, and password login. Hop over to PuTTY and connect to your router, normally 192.168.1.1. Log in as root and input your password. Hit 8 to enter the shell.
In OPNsense the tftp directory is /usr/local/tftp
> cd /usr/local
>
> mkdir tftp
>
> cd ./tftp
>
> fetch http://ftp.nl.debian.org/debian/dists/buster/main/installer-amd64/current/images/netboot/netboot.tar.gz
>
> fetch http://ftp.nl.debian.org/debian/dists/buster/main/installer-amd64/current/images/netboot/pxelinux.0
>
> fetch https://bootstrap.grid.tf/krn/prod/<FARMID>
>
> mv <FARMID> ipxe-prod.lkrn
>
> tar -xvzf netboot.tar.gz
>
> rm version.info netboot.tar.gz
>
> rm pxelinux.cfg/default
>
> echo 'default ipxe-prod.lkrn' >> pxelinux.cfg/default
You can get out of shell by entering exit or just closing the window.
**3Node Setup**
Set the server to BIOS boot and put PXE or network boot as the first choice. At least on Dell machines, make sure you have the network cable in plug 1, or it won't boot.
### PXE Booting with pfSense
> This documentation comes from the [amazing Network Booting Guide](https://forum.threefold.io/t/network-booting-tutorial/2688/7) by @TheCaptain on the ThreeFold Forum.
These are the steps required to enable PXE booting on pfSense. This guide assumes you'll be using the router as your PXE server; pfSense allows boot file uploads directly from its web GUI.
* Log into your pfSense instance
* Go to System>Package Manager
* Search and add tftpd package under Available Packages tab
* Go to Services>TFTP Server
* Under Settings tab check enable and enter the router IP in TFTP Server Bind IP field
* Switch to Files tab under Services>TFTP Server and upload your ipxe-prod.efi file acquired from https://v3.bootstrap.grid.tf/ (second option labeled EFI Kernel)
* Go to Services>DHCP Server
* Under the Other Options section, click Display Advanced next to TFTP and enter the router IP
* Click Display Advanced next to Network Booting
* Check enable, enter router IP in Next Server field
* Enter ipxe-prod.efi in Default BIOS file name field
That's it! You'll want to ensure your clients are configured with boot priority set to IPv4 in the first spot. You might need to disable secure boot and enable legacy boot within the BIOS.
## Booting Issues
### Multiple nodes can run with the same node ID
This is a [known issue](https://github.com/threefoldtech/info_grid/issues/122) and will be resolved once the TPM effort gets finalized.
<h1>GPU Farming</h1>
Welcome to the *GPU Farming* section of the ThreeFold Manual!
In this guide, we delve into the realm of GPU farming, shedding light on the significance of Graphics Processing Units (GPUs) and how they can be seamlessly integrated into the ThreeFold ecosystem.
<h2>Table of Contents</h2>
- [Understanding GPUs](#understanding-gpus)
- [Get Started](#get-started)
- [Install the GPU](#install-the-gpu)
- [GPU Node and the Farmerbot](#gpu-node-and-the-farmerbot)
- [Set a Price for the GPU Node](#set-a-price-for-the-gpu-node)
- [Check the GPU Node on the Node Finder](#check-the-gpu-node-on-the-node-finder)
- [Reserving the GPU Node](#reserving-the-gpu-node)
- [Questions and Feedback](#questions-and-feedback)
***
## Understanding GPUs
A Graphics Processing Unit, or GPU, is a specialized electronic circuit designed to accelerate the rendering of images and videos. Originally developed for graphics-intensive tasks in gaming and multimedia applications, GPUs have evolved into powerful parallel processors with the ability to handle complex computations, such as 3D rendering, AI and machine learning.
In the context of ThreeFold, GPU farming involves harnessing the computational power of Graphics Processing Units to contribute to the decentralized grid. This empowers users to participate in the network's mission of creating a more equitable and efficient internet infrastructure.
## Get Started
In this guide, we focus on the integration of GPUs with a 3Node, the fundamental building block of the ThreeFold Grid. The process involves adding a GPU to enhance the capabilities of your node, providing increased processing power and versatility for a wide range of tasks. Note that any Nvidia or AMD graphics card should work as long as it's supported by the system.
## Install the GPU
We cover the basic steps to install the GPU on your 3Node.
* Find a proper GPU model for your specific 3Node hardware
* Install the GPU on the server
* Note: You might need to move or remove some pieces of your server to make room for the GPU
* (Optional) Boot the 3Node with a Linux distro (e.g. Ubuntu) and use the terminal to check if the GPU is recognized by the system
* ```
sudo lshw -C Display
```
* Output example with an AMD Radeon (on the line `product: ...`)
![gpu_farming](./img/cli_display_gpu.png)
* Boot the 3Node with the ZOS bootstrap image
## GPU Node and the Farmerbot
If you are using the Farmerbot, it might be a good idea to first boot the GPU node without the Farmerbot (i.e. to remove the node in the config file and restart the Farmerbot). Once you've confirmed that the GPU is properly detected by TFChain, you can then put back the GPU node in the config file and restart the Farmerbot. While this is not necessary, it can be an effective way to test the GPU node separately.
## Set a Price for the GPU Node
You can [set additional fees](../farming_optimization/set_additional_fees.md) for your GPU dedicated node on the [TF Dashboard](https://dashboard.grid.tf/).
When a user reserves your 3Node as a dedicated node, you will receive TFT payments once every 24 hours. These TFT payments will be sent to the TFChain account of your farm's twin.
## Check the GPU Node on the Node Finder
You can use the [Node Finder](../../dashboard/deploy/node_finder.md) on the [TF Dashboard](https://dashboard.grid.tf/) to verify that the node is displayed as having a GPU.
* On the Dashboard, go to the Node Finder
* Under **Node ID**, write the node ID of the GPU node
* Once the results are displayed, you should see **1** under **GPU**
* If you are using the Status bot, you might need to change the node status under **Select Nodes Status** (e.g. **Down**, **Standby**) to see the node's information
> Note: It can take some time for the GPU parameter to be displayed.
## Reserving the GPU Node
Now, users can reserve the node in the **Dedicated Nodes** section of the Dashboard and then deploy workloads using the GPU. For more information, read [this documentation](../../dashboard/deploy/dedicated_machines.md).
## Questions and Feedback
If you have any questions or feedback, we invite you to discuss with the ThreeFold community on the [ThreeFold Forum](https://forum.threefold.io/) or on the [ThreeFold Farmer chat](https://t.me/threefoldfarmers) on Telegram.