manual developers added content before proxy

This commit is contained in:
mik-tf 2024-04-15 21:12:29 +00:00
parent 037937df0c
commit 99c05100c3
17 changed files with 2337 additions and 0 deletions

<h1> Capacity Planning </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Example](#example)
***
## Introduction
Capacity planning is almost the same as [deploying a single VM](../javascript/grid3_javascript_vm.md); the only difference is that you can automate the choice of the node to deploy on in code. The client supports `FilterOptions` to filter nodes on specific criteria, e.g. the node resources (CRU, SRU, HRU, MRU), membership in a specific farm, location in a given country, or whether the node is a gateway.
## Example
```ts
FilterOptions: {
  accessNodeV4?: boolean;
  accessNodeV6?: boolean;
  city?: string;
  country?: string;
  cru?: number;
  hru?: number;
  mru?: number;
  sru?: number;
  farmId?: number;
  farmName?: string;
  gateway?: boolean;
  publicIPs?: boolean;
  certified?: boolean;
  dedicated?: boolean;
  availableFor?: number;
  page?: number;
}
```
```ts
import { DiskModel, FilterOptions, MachineModel, MachinesModel, NetworkModel } from "../src";
import { config, getClient } from "./client_loader";
import { log } from "./utils";
async function main() {
const grid3 = await getClient();
// create network Object
const n = new NetworkModel();
n.name = "dynamictest";
n.ip_range = "10.249.0.0/16";
// create disk Object
const disk = new DiskModel();
disk.name = "dynamicDisk";
disk.size = 8;
disk.mountpoint = "/testdisk";
const vmQueryOptions: FilterOptions = {
cru: 1,
mru: 2, // GB
sru: 9,
country: "Belgium",
availableFor: grid3.twinId,
};
// create vm node Object
const vm = new MachineModel();
vm.name = "testvm";
vm.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId; // TODO: allow random choice
vm.disks = [disk];
vm.public_ip = false;
vm.planetary = true;
vm.cpu = 1;
vm.memory = 1024 * 2;
vm.rootfs_size = 0;
vm.flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist";
vm.entrypoint = "/sbin/zinit init";
vm.env = {
SSH_KEY: config.ssh_key,
};
// create VMs Object
const vms = new MachinesModel();
vms.name = "dynamicVMS";
vms.network = n;
vms.machines = [vm];
vms.metadata = "{'testVMs': true}";
vms.description = "test deploying VMs via ts grid3 client";
// deploy vms
const res = await grid3.machines.deploy(vms);
log(res);
// get the deployment
const l = await grid3.machines.getObj(vms.name);
log(l);
// // delete
// const d = await grid3.machines.delete({ name: vms.name });
// log(d);
await grid3.disconnect();
}
main();
```
In this example, note the filter criteria used for `server1`:
```typescript
const server1_options: FilterOptions = {
cru: 1,
mru: 2, // GB
sru: 9,
country: "Belgium",
availableFor: grid3.twinId,
};
```
Here we want all the nodes with `CRU: 1`, `MRU: 2`, `SRU: 9`, located in `Belgium`, and available for me (not rented by someone else).
> Note: Some libraries allow reverse lookup of country codes by name, e.g. [i18n-iso-countries](https://www.npmjs.com/package/i18n-iso-countries).
Then, in the `MachineModel`, we set the `node_id` to the first result of our filtering:
```typescript
vm.node_id = +(await grid3.capacity.filterNodes(server1_options))[0].nodeId;
```
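Instead of always taking the first result, you could pick a node at random from the filtered list. A minimal sketch of that idea (the `pickRandomNode` helper below is our own, not part of the client; it operates on the array returned by `filterNodes`):

```typescript
// Hypothetical helper: pick a random node from filterNodes results.
// Each entry is assumed to have a `nodeId` field, as in the examples above.
function pickRandomNode(nodes: { nodeId: number | string }[]): number {
  if (nodes.length === 0) throw Error("no nodes match the filter");
  const index = Math.floor(Math.random() * nodes.length);
  return +nodes[index].nodeId;
}

// Usage sketch:
// vm.node_id = pickRandomNode(await grid3.capacity.filterNodes(server1_options));
```

This spreads deployments across all matching nodes instead of always loading the first match.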

<h1> Deploy CapRover </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Leader Node](#leader-node)
- [Code Example](#code-example)
- [Environment Variables](#environment-variables)
- [Worker Node](#worker-node)
- [Code Example](#code-example-1)
- [Environment Variables](#environment-variables-1)
- [Questions and Feedback](#questions-and-feedback)
***
## Introduction
In this section, we show how to deploy CapRover with the Javascript client.
This deployment is very similar to what we have in the section [Deploy a VM](./grid3_javascript_vm.md), but the environment variables are different.
## Leader Node
We present here a code example and the environment variables to deploy a CapRover Leader node.
For further details about the Leader node deployment, [read this documentation](https://github.com/freeflowuniverse/freeflow_caprover#a-leader-node-deploymentsetup).
### Code Example
```ts
import {
DiskModel,
FilterOptions,
MachineModel,
MachinesModel,
NetworkModel,
} from "../src";
import { config, getClient } from "./client_loader";
import { log } from "./utils";
async function main() {
const grid3 = await getClient();
const vmQueryOptions: FilterOptions = {
cru: 4,
mru: 4, // GB
sru: 10,
farmId: 1,
};
const CAPROVER_FLIST =
"https://hub.grid.tf/tf-official-apps/tf-caprover-latest.flist";
// create network Object
const n = new NetworkModel();
n.name = "wedtest";
n.ip_range = "10.249.0.0/16";
// create disk Object
const disk = new DiskModel();
disk.name = "wedDisk";
disk.size = 10;
disk.mountpoint = "/var/lib/docker";
// create vm node Object
const vm = new MachineModel();
vm.name = "testvm";
vm.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId;
vm.disks = [disk];
vm.public_ip = true;
vm.planetary = false;
vm.cpu = 4;
vm.memory = 1024 * 4;
vm.rootfs_size = 0;
vm.flist = CAPROVER_FLIST;
vm.entrypoint = "/sbin/zinit init";
vm.env = {
PUBLIC_KEY: config.ssh_key,
SWM_NODE_MODE: "leader",
CAPROVER_ROOT_DOMAIN: "rafy.grid.tf", // update me
DEFAULT_PASSWORD: "captain42",
CAPTAIN_IMAGE_VERSION: "latest",
};
// create VMs Object
const vms = new MachinesModel();
vms.name = "newVMS5";
vms.network = n;
vms.machines = [vm];
vms.metadata = "{'testVMs': true}";
vms.description = "caprover leader machine/node";
// deploy vms
const res = await grid3.machines.deploy(vms);
log(res);
// get the deployment
const l = await grid3.machines.getObj(vms.name);
log(l);
log(
`You can access Caprover via the browser using: https://captain.${vm.env.CAPROVER_ROOT_DOMAIN}`
);
// // delete
// const d = await grid3.machines.delete({ name: vms.name });
// log(d);
await grid3.disconnect();
}
main();
```
### Environment Variables
- PUBLIC_KEY: Your public SSH key, used to access the deployed VM.
- SWM_NODE_MODE: CapRover node type, which must be `leader` as we are deploying a Leader node.
- CAPROVER_ROOT_DOMAIN: The root domain that will be bound to the deployed VM; update it to a domain you control.
- DEFAULT_PASSWORD: The CapRover default password you want to deploy with.
## Worker Node
We present here a code example and the environment variables to deploy a CapRover Worker node.
Note that before deploying the Worker node, you should check the following:
- Get the Leader node's public IP address.
- The Worker node joins the cluster from the CapRover UI by adding its public IP address and the private SSH key.
For further information, [read this documentation](https://github.com/freeflowuniverse/freeflow_caprover#step-4-access-the-captain-dashboard).
### Code Example
```ts
import {
DiskModel,
FilterOptions,
MachineModel,
MachinesModel,
NetworkModel,
} from "../src";
import { config, getClient } from "./client_loader";
import { log } from "./utils";
async function main() {
const grid3 = await getClient();
const vmQueryOptions: FilterOptions = {
cru: 4,
mru: 4, // GB
sru: 10,
farmId: 1,
};
const CAPROVER_FLIST =
"https://hub.grid.tf/tf-official-apps/tf-caprover-latest.flist";
// create network Object
const n = new NetworkModel();
n.name = "wedtest";
n.ip_range = "10.249.0.0/16";
// create disk Object
const disk = new DiskModel();
disk.name = "wedDisk";
disk.size = 10;
disk.mountpoint = "/var/lib/docker";
// create vm node Object
const vm = new MachineModel();
vm.name = "capworker1";
vm.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId;
vm.disks = [disk];
vm.public_ip = true;
vm.planetary = false;
vm.cpu = 4;
vm.memory = 1024 * 4;
vm.rootfs_size = 0;
vm.flist = CAPROVER_FLIST;
vm.entrypoint = "/sbin/zinit init";
vm.env = {
// These env vars need to be changed based on the leader node.
PUBLIC_KEY: config.ssh_key,
SWM_NODE_MODE: "worker",
LEADER_PUBLIC_IP: "185.206.122.157",
CAPTAIN_IMAGE_VERSION: "latest",
};
// create VMs Object
const vms = new MachinesModel();
vms.name = "newVMS6";
vms.network = n;
vms.machines = [vm];
vms.metadata = "{'testVMs': true}";
vms.description = "caprover worker machine/node";
// deploy vms
const res = await grid3.machines.deploy(vms);
log(res);
// get the deployment
const l = await grid3.machines.getObj(vms.name);
log(l);
// // delete
// const d = await grid3.machines.delete({ name: vms.name });
// log(d);
await grid3.disconnect();
}
main();
```
### Environment Variables
The deployment of the Worker node is similar to the deployment of the Leader node, with the exception of the environment variables which differ slightly.
- PUBLIC_KEY: Your public SSH key, used to access the deployed VM.
- SWM_NODE_MODE: CapRover node type, which must be `worker` as we are deploying a Worker node.
- LEADER_PUBLIC_IP: The Leader node's public IP.
## Questions and Feedback
If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.

<h1> GPU Support and JavaScript </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Example](#example)
***
## Introduction
We present here a quick introduction to GPU support with JavaScript.
There are a couple of updates regarding finding nodes with GPUs, querying a node for GPU information, and deploying with GPU support.
This is an ongoing development and this section will be updated as new information comes in.
## Example
Here is an example script to deploy with GPU support:
```ts
import { DiskModel, FilterOptions, MachineModel, MachinesModel, NetworkModel } from "../src";
import { config, getClient } from "./client_loader";
import { log } from "./utils";
async function main() {
const grid3 = await getClient();
// create network Object
const n = new NetworkModel();
n.name = "vmgpuNetwork";
n.ip_range = "10.249.0.0/16";
// create disk Object
const disk = new DiskModel();
disk.name = "vmgpuDisk";
disk.size = 100;
disk.mountpoint = "/testdisk";
const vmQueryOptions: FilterOptions = {
cru: 8,
mru: 16, // GB
sru: 100,
availableFor: grid3.twinId,
hasGPU: true,
rentedBy: grid3.twinId,
};
// create vm node Object
const vm = new MachineModel();
vm.name = "vmgpu";
vm.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId; // TODO: allow random choice
vm.disks = [disk];
vm.public_ip = false;
vm.planetary = true;
vm.cpu = 8;
vm.memory = 1024 * 16;
vm.rootfs_size = 0;
vm.flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist";
vm.entrypoint = "/";
vm.env = {
SSH_KEY: config.ssh_key,
};
vm.gpu = ["0000:0e:00.0/1002/744c"]; // the GPU card's ID; you can check the available GPUs from the dashboard
// create VMs Object
const vms = new MachinesModel();
vms.name = "vmgpu";
vms.network = n;
vms.machines = [vm];
vms.metadata = "";
vms.description = "test deploying VM with GPU via ts grid3 client";
// deploy vms
const res = await grid3.machines.deploy(vms);
log(res);
// get the deployment
const l = await grid3.machines.getObj(vms.name);
log(l);
// delete
const d = await grid3.machines.delete({ name: vms.name });
log(d);
await grid3.disconnect();
}
main();
```
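In the example above, `vm.gpu` takes the GPU card ID in the form `<PCI slot>/<vendor ID>/<device ID>`. A small illustrative parser (our own helper, with the format inferred from the example; not part of the client):

```typescript
// Parse a GPU card ID string like "0000:0e:00.0/1002/744c" into its parts.
interface GpuCardId {
  slot: string;   // PCI slot, e.g. "0000:0e:00.0"
  vendor: string; // PCI vendor ID, e.g. "1002" (AMD)
  device: string; // PCI device ID, e.g. "744c"
}

function parseGpuCardId(id: string): GpuCardId {
  const parts = id.split("/");
  if (parts.length !== 3) throw Error(`unexpected GPU id format: ${id}`);
  const [slot, vendor, device] = parts;
  return { slot, vendor, device };
}
```

This can be handy when cross-checking the dashboard's GPU listing against the ID you pass to `vm.gpu`.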

<h1>Installation</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [External Package](#external-package)
- [Local Usage](#local-usage)
- [Getting Started](#getting-started)
- [Client Configuration](#client-configuration)
- [Generate the Documentation](#generate-the-documentation)
- [How to Run the Scripts](#how-to-run-the-scripts)
- [Reference API](#reference-api)
***
## Introduction
We present here the general steps required to install and use the ThreeFold Grid Client.
The [Grid Client](https://github.com/threefoldtech/tfgrid-sdk-ts/tree/development/packages/grid_client) is written using [TypeScript](https://www.typescriptlang.org/) to provide more convenience and type-checked code. It is used to deploy workloads like virtual machines, kubernetes clusters, quantum storage, and more.
## Prerequisites
To install the Grid Client, you will need the following on your machine:
- [Node.js](https://nodejs.org/en) ^18
- npm 8.2.0 or higher
- You may need to install libtool (**apt-get install libtool**)
> Note: [nvm](https://nvm.sh/) is the recommended way to install Node.js.
To use the Grid Client, you will need the following on the TFGrid:
- A TFChain account
- TFT in your wallet
If that is not the case, please visit the [Get Started section](../../system_administrators/getstarted/tfgrid3_getstarted.md).
## Installation
### External Package
To install the external package, simply run the following command:
```bash
yarn add @threefold/grid_client
```
> Note: For the **qa**, **test** and **main** networks, please use version @2.1.1.
### Local Usage
To use the Grid Client locally, clone the repository then install the Grid Client:
- Clone the repository
- ```bash
git clone https://github.com/threefoldtech/tfgrid-sdk-ts
```
- Install the Grid Client
- With yarn
- ```bash
yarn install
```
- With npm
- ```bash
npm install
```
> Note: In the directory **grid_client/scripts**, we provided a set of scripts to test the Grid Client.
## Getting Started
You will need to set the client configuration either by setting the json file manually (**scripts/config.json**) or by using the provided script (**scripts/client_loader.ts**).
### Client Configuration
Make sure to set the client configuration properly before using the Grid Client.
- **network**: The network environment (**dev**, **qa**, **test** or **main**).
- **mnemonic**: The 12 words mnemonics for your account.
- Learn how to create one [here](../../dashboard/wallet_connector.md).
- **storeSecret**: This is any word that will be used for encrypting/decrypting the keys on ThreeFold key-value store.
- **ssh_key**: The public SSH key set on your machine.
> Note: Networks can't be isolated; all projects can see the same network.
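For reference, here is a minimal **scripts/config.json** putting these fields together (the values are placeholders; substitute your own):

```json
{
  "network": "dev",
  "mnemonic": "<your 12 words mnemonic>",
  "storeSecret": "secret",
  "ssh_key": "<your public SSH key>"
}
```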
## Generate the Documentation
The easiest way to test the installation is to run the following command with either yarn or npm to generate the Grid Client documentation:
* With yarn
* ```
yarn run serve-docs
```
* With npm
* ```
npm run serve-docs
```
> Note: You can also use the command **yarn run** to see all available options.
## How to Run the Scripts
You can explore the Grid Client by testing the different scripts proposed in **grid_client/scripts**.
- Update your customized deployments specs if needed
- Run using [ts-node](https://www.npmjs.com/ts-node)
- With yarn
- ```bash
yarn run ts-node --project tsconfig-node.json scripts/zdb.ts
```
- With npx
- ```bash
npx ts-node --project tsconfig-node.json scripts/zdb.ts
```
## Reference API
While this is still a work in progress, you can have a look [here](https://threefoldtech.github.io/tfgrid-sdk-ts/packages/grid_client/docs/api/index.html).

<h1> Deploying a Kubernetes Cluster </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Example code](#example-code)
- [Detailed explanation](#detailed-explanation)
- [Building network](#building-network)
- [Building nodes](#building-nodes)
- [Building cluster](#building-cluster)
- [Deploying](#deploying)
- [Getting deployment information](#getting-deployment-information)
- [Deleting deployment](#deleting-deployment)
***
## Introduction
We show how to deploy a Kubernetes cluster on the TFGrid with the Javascript client.
## Prerequisites
- Make sure you have your [client](./grid3_javascript_loadclient.md) prepared
## Example code
```ts
import { FilterOptions, K8SModel, KubernetesNodeModel, NetworkModel } from "../src";
import { config, getClient } from "./client_loader";
import { log } from "./utils";
async function main() {
const grid3 = await getClient();
// create network Object
const n = new NetworkModel();
n.name = "monNetwork";
n.ip_range = "10.238.0.0/16";
n.addAccess = true;
const masterQueryOptions: FilterOptions = {
cru: 2,
mru: 2, // GB
sru: 2,
availableFor: grid3.twinId,
farmId: 1,
};
const workerQueryOptions: FilterOptions = {
cru: 1,
mru: 1, // GB
sru: 1,
availableFor: grid3.twinId,
farmId: 1,
};
// create k8s node Object
const master = new KubernetesNodeModel();
master.name = "master";
master.node_id = +(await grid3.capacity.filterNodes(masterQueryOptions))[0].nodeId;
master.cpu = 1;
master.memory = 1024;
master.rootfs_size = 0;
master.disk_size = 1;
master.public_ip = false;
master.planetary = true;
// create k8s node Object
const worker = new KubernetesNodeModel();
worker.name = "worker";
worker.node_id = +(await grid3.capacity.filterNodes(workerQueryOptions))[0].nodeId;
worker.cpu = 1;
worker.memory = 1024;
worker.rootfs_size = 0;
worker.disk_size = 1;
worker.public_ip = false;
worker.planetary = true;
// create k8s Object
const k = new K8SModel();
k.name = "testk8s";
k.secret = "secret";
k.network = n;
k.masters = [master];
k.workers = [worker];
k.metadata = "{'testk8s': true}";
k.description = "test deploying k8s via ts grid3 client";
k.ssh_key = config.ssh_key;
// deploy
const res = await grid3.k8s.deploy(k);
log(res);
// get the deployment
const l = await grid3.k8s.getObj(k.name);
log(l);
// // delete
// const d = await grid3.k8s.delete({ name: k.name });
// log(d);
await grid3.disconnect();
}
main();
```
## Detailed explanation
### Building network
```typescript
// create network Object
const n = new NetworkModel();
n.name = "monNetwork";
n.ip_range = "10.238.0.0/16";
n.addAccess = true; // add WireGuard access to the network
```
### Building nodes
```typescript
// create k8s node Object
const master = new KubernetesNodeModel();
master.name = "master";
master.node_id = +(await grid3.capacity.filterNodes(masterQueryOptions))[0].nodeId;
master.cpu = 1;
master.memory = 1024;
master.rootfs_size = 0;
master.disk_size = 1;
master.public_ip = false;
master.planetary = true;
// create k8s node Object
const worker = new KubernetesNodeModel();
worker.name = "worker";
worker.node_id = +(await grid3.capacity.filterNodes(workerQueryOptions))[0].nodeId;
worker.cpu = 1;
worker.memory = 1024;
worker.rootfs_size = 0;
worker.disk_size = 1;
worker.public_ip = false;
worker.planetary = true;
```
### Building cluster
Here we specify the cluster project name, the cluster secret, the network model to be used, the master and worker nodes, and the SSH key to access them:
```ts
// create k8s Object
const k = new K8SModel();
k.name = "testk8s";
k.secret = "secret";
k.network = n;
k.masters = [master];
k.workers = [worker];
k.metadata = "{'testk8s': true}";
k.description = "test deploying k8s via ts grid3 client";
k.ssh_key = config.ssh_key;
```
### Deploying
Use the `deploy` function to deploy the Kubernetes cluster:
```ts
const res = await grid3.k8s.deploy(k);
log(res);
```
### Getting deployment information
```ts
const l = await grid3.k8s.getObj(k.name);
log(l);
```
### Deleting deployment
```ts
const d = await grid3.k8s.delete({ name: k.name });
log(d);
```

<h1>Using TFChain KVStore</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Example code](#example-code)
- [setting values](#setting-values)
- [getting key](#getting-key)
- [listing keys](#listing-keys)
- [deleting key](#deleting-key)
***
## Introduction
As part of TFChain, we support a key-value store module that can be used for any value within the `2KB` size limit. In practice, it is used to save the user's configuration state so it can be rebuilt on any machine, given the same mnemonic and the same secret.
## Prerequisites
- Make sure you have your [client](./grid3_javascript_loadclient.md) prepared
## Example code
```ts
import { getClient } from "./client_loader";
import { log } from "./utils";
/*
KVStore example usage:
*/
async function main() {
//For creating a grid3 client with KVStore, you need to specify the KVStore storage type in the params:
const gridClient = await getClient();
//then every module will use the KVStore to save its configuration and restore it.
// also you can use it like this:
const db = gridClient.kvstore;
// set key
const key = "hamada";
const exampleObj = {
key1: "value1",
key2: 2,
};
// set key
await db.set({ key, value: JSON.stringify(exampleObj) });
// list all the keys
const keys = await db.list();
log(keys);
// get the key
const data = await db.get({ key });
log(JSON.parse(data));
// remove the key
await db.remove({ key });
await gridClient.disconnect();
}
main();
```
### setting values
`db.set` is used to set a key to any value, serialized as a string.
```ts
await db.set({ key, value: JSON.stringify(exampleObj) });
```
### getting key
`db.get` is used to get a specific key.
```ts
const data = await db.get({ key });
log(JSON.parse(data));
```
### listing keys
`db.list` is used to list all the keys.
```ts
const keys = await db.list();
log(keys);
```
### deleting key
`db.remove` is used to delete a specific key.
```ts
await db.remove({ key });
```
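Since values are limited to roughly `2KB` (see the introduction), it can be worth checking the serialized size before calling `db.set`. A small illustrative guard (our own helper, not part of the client):

```typescript
// Check that a value fits within the ~2KB TFChain KVStore limit before storing it.
const KVSTORE_MAX_BYTES = 2 * 1024;

function fitsInKVStore(value: unknown): boolean {
  const serialized = JSON.stringify(value);
  return Buffer.byteLength(serialized, "utf8") <= KVSTORE_MAX_BYTES;
}

// Usage sketch:
// if (fitsInKVStore(exampleObj)) await db.set({ key, value: JSON.stringify(exampleObj) });
```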

<h1> Grid3 Client</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Client Configurations](#client-configurations)
- [Creating/Initializing The Grid3 Client](#creatinginitializing-the-grid3-client)
- [What is `rmb-rs` | Reliable Message Bus --rust](#what-is-rmb-rs--reliable-message-bus---rust)
- [Grid3 Client Options](#grid3-client-options)
## Introduction
Grid3 Client is a client used for deploying workloads (VMs, ZDBs, k8s, etc.) on the TFGrid.
## Client Configurations
To use the client, you have to set up your configuration file as follows:
```json
{
"network": "dev",
"mnemonic": "<Your mnemonic>",
"storeSecret": "secret",
"ssh_key": ""
}
```
## Creating/Initializing The Grid3 Client
```ts
async function getClient(): Promise<GridClient> {
const gridClient = new GridClient({
network: "dev", // can be dev, qa, test, main, or custom
mnemonic: "<add your mnemonic here>",
});
await gridClient.connect();
return gridClient;
}
```
The grid client uses the `rmb-rs` tool to send requests to/from nodes.
## What is `rmb-rs` | Reliable Message Bus --rust
Reliable Message Bus is a secure communication channel that allows bots to communicate in a chat-like way. It makes it very easy to host a service or a set of functions to be used by anyone, even if your service is running behind NAT.
Out of the box, RMB provides the following:
- Guaranteed authenticity of the messages: you are always sure that the received message is from whoever it claims to be from.
- End-to-end encryption.
- Support for third-party hosted relays: anyone can host a relay, and people can use it safely since messages cannot be inspected while using end-to-end encryption. This is similar to Matrix home servers.
## Grid3 Client Options
- network: `dev` for devnet, `qa` for QAnet, `test` for testnet, `main` for mainnet.
- mnemonic: used for signing the requests.
- storeSecret: used to encrypt data stored in the backend. It can be any word, and it is used for encrypting/decrypting the keys on the ThreeFold key-value store. If left empty, the Grid Client will use the mnemonic as the storeSecret.
- backendStorage: can be `auto`, which will automatically use the `filesystem backend` when running in a Node.js environment, or the `localstorage backend` when running in a browser environment. You can also set it to `kvstore` to use the TFChain key-value store module.
- keypairType: defaults to `sr25519`; most likely you will never need to change it. `ed25519` is supported too.
For more details, check the [client options](https://github.com/threefoldtech/tfgrid-sdk-ts/blob/development/packages/grid_client/docs/client_configuration.md).
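The storeSecret fallback described above can be sketched as follows (this helper is illustrative, not the client's actual code):

```typescript
// If no storeSecret is provided, the mnemonic is used for encryption instead.
function resolveStoreSecret(mnemonic: string, storeSecret?: string): string {
  return storeSecret && storeSecret.length > 0 ? storeSecret : mnemonic;
}
```

Keep in mind that changing the storeSecret later means previously stored configuration can no longer be decrypted.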
> Note: The choice of the node is completely up to the user at this point; they need to do the capacity planning. Check the [Node Finder](../../dashboard/deploy/node_finder.md) to know which nodes fit your deployment criteria.
Check the document on [capacity planning using code](../javascript/grid3_javascript_capacity_planning.md) if you want to automate it.
> Note: This feature is still experimental.

<h1>Deploying a VM with QSFS</h1>
<h2>Table of Contents</h2>
- [Prerequisites](#prerequisites)
- [Code Example](#code-example)
- [Detailed Explanation](#detailed-explanation)
- [Getting the Client](#getting-the-client)
- [Preparing QSFS](#preparing-qsfs)
- [Deploying a VM with QSFS](#deploying-a-vm-with-qsfs)
- [Getting the Deployment Information](#getting-the-deployment-information)
- [Deleting a Deployment](#deleting-a-deployment)
***
## Prerequisites
First, make sure that you have your [client](./grid3_javascript_loadclient.md) prepared.
## Code Example
```ts
import { FilterOptions, MachinesModel, QSFSZDBSModel } from "../src";
import { config, getClient } from "./client_loader";
import { log } from "./utils";
async function main() {
const grid3 = await getClient();
const qsfs_name = "wed2710q1";
const machines_name = "wed2710t1";
const vmQueryOptions: FilterOptions = {
cru: 1,
mru: 1, // GB
sru: 1,
availableFor: grid3.twinId,
farmId: 1,
};
const qsfsQueryOptions: FilterOptions = {
hru: 6,
availableFor: grid3.twinId,
farmId: 1,
};
const qsfsNodes = [];
const allNodes = await grid3.capacity.filterNodes(qsfsQueryOptions);
if (allNodes.length >= 2) {
qsfsNodes.push(+allNodes[0].nodeId, +allNodes[1].nodeId);
} else {
throw Error("Couldn't find nodes for qsfs");
}
const vmNode = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId;
const qsfs: QSFSZDBSModel = {
name: qsfs_name,
count: 8,
node_ids: qsfsNodes,
password: "mypassword",
disk_size: 1,
description: "my qsfs test",
metadata: "",
};
const vms: MachinesModel = {
name: machines_name,
network: {
name: "wed2710n1",
ip_range: "10.201.0.0/16",
},
machines: [
{
name: "wed2710v1",
node_id: vmNode,
disks: [
{
name: "wed2710d1",
size: 1,
mountpoint: "/mydisk",
},
],
qsfs_disks: [
{
qsfs_zdbs_name: qsfs_name,
name: "wed2710d2",
minimal_shards: 2,
expected_shards: 4,
encryption_key: "hamada",
prefix: "hamada",
cache: 1,
mountpoint: "/myqsfsdisk",
},
],
public_ip: false,
public_ip6: false,
planetary: true,
cpu: 1,
memory: 1024,
rootfs_size: 0,
flist: "https://hub.grid.tf/tf-official-apps/base:latest.flist",
entrypoint: "/sbin/zinit init",
env: {
SSH_KEY: config.ssh_key,
},
},
],
metadata: "{'testVMs': true}",
description: "test deploying VMs via ts grid3 client",
};
async function cancel(grid3) {
// delete
const d = await grid3.machines.delete({ name: machines_name });
log(d);
const r = await grid3.qsfs_zdbs.delete({ name: qsfs_name });
log(r);
}
//deploy qsfs
const res = await grid3.qsfs_zdbs.deploy(qsfs);
log(">>>>>>>>>>>>>>>QSFS backend has been created<<<<<<<<<<<<<<<");
log(res);
const vm_res = await grid3.machines.deploy(vms);
log(">>>>>>>>>>>>>>>vm has been created<<<<<<<<<<<<<<<");
log(vm_res);
// get the deployment
const l = await grid3.machines.getObj(vms.name);
log(">>>>>>>>>>>>>>>Deployment result<<<<<<<<<<<<<<<");
log(l);
// await cancel(grid3);
await grid3.disconnect();
}
main();
```
## Detailed Explanation
We present a detailed explanation of the example shown above.
### Getting the Client
```ts
const grid3 = await getClient();
```
### Preparing QSFS
```ts
const qsfs_name = "wed2710q1";
const machines_name = "wed2710t1";
```
We prepare here some names to use across the client for the QSFS and the machines projects.
```ts
const qsfsQueryOptions: FilterOptions = {
hru: 6,
availableFor: grid3.twinId,
farmId: 1,
};
const qsfsNodes = [];
const allNodes = await grid3.capacity.filterNodes(qsfsQueryOptions);
if (allNodes.length >= 2) {
qsfsNodes.push(+allNodes[0].nodeId, +allNodes[1].nodeId);
} else {
throw Error("Couldn't find nodes for qsfs");
}
const vmNode = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId;
const qsfs: QSFSZDBSModel = {
name: qsfs_name,
count: 8,
node_ids: qsfsNodes,
password: "mypassword",
disk_size: 1,
description: "my qsfs test",
metadata: "",
};
const res = await grid3.qsfs_zdbs.deploy(qsfs);
log(">>>>>>>>>>>>>>>QSFS backend has been created<<<<<<<<<<<<<<<");
log(res);
```
Here we deploy `8` ZDBs on the two selected nodes with the password `mypassword`, each with a disk size of `1GB`.
### Deploying a VM with QSFS
```ts
const vmQueryOptions: FilterOptions = {
cru: 1,
mru: 1, // GB
sru: 1,
availableFor: grid3.twinId,
farmId: 1,
};
const vmNode = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId;
// deploy vms
const vms: MachinesModel = {
name: machines_name,
network: {
name: "wed2710n1",
ip_range: "10.201.0.0/16",
},
machines: [
{
name: "wed2710v1",
node_id: vmNode,
disks: [
{
name: "wed2710d1",
size: 1,
mountpoint: "/mydisk",
},
],
qsfs_disks: [
{
qsfs_zdbs_name: qsfs_name,
name: "wed2710d2",
minimal_shards: 2,
expected_shards: 4,
encryption_key: "hamada",
prefix: "hamada",
cache: 1,
mountpoint: "/myqsfsdisk",
},
],
public_ip: false,
public_ip6: false,
planetary: true,
cpu: 1,
memory: 1024,
rootfs_size: 0,
flist: "https://hub.grid.tf/tf-official-apps/base:latest.flist",
entrypoint: "/sbin/zinit init",
env: {
SSH_KEY: config.ssh_key,
},
},
],
metadata: "{'testVMs': true}",
description: "test deploying VMs via ts grid3 client",
};
const vm_res = await grid3.machines.deploy(vms);
log(">>>>>>>>>>>>>>>vm has been created<<<<<<<<<<<<<<<");
log(vm_res);
```
This deployment is almost identical to the one in the [VM deployment section](./grid3_javascript_vm.md); the only addition is the `qsfs_disks` section:
```ts
qsfs_disks: [{
qsfs_zdbs_name: qsfs_name,
name: "wed2710d2",
minimal_shards: 2,
expected_shards: 4,
encryption_key: "hamada",
prefix: "hamada",
cache: 1,
mountpoint: "/myqsfsdisk"
}],
```
`qsfs_disks` is a list, representing all of the QSFS disks used within that VM.
- `qsfs_zdbs_name`: the backend ZDBs we defined at the beginning.
- `expected_shards`: how many ZDBs that QSFS should be working with.
- `minimal_shards`: the minimum number of shards needed to recover the data when losing disks, e.g. due to failure.
- `mountpoint`: where the QSFS disk will be mounted in the VM, here `/myqsfsdisk`.
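With `minimal_shards: 2` and `expected_shards: 4`, any 2 of the 4 shards are enough to rebuild the data, so the raw storage used is roughly `expected_shards / minimal_shards` times the logical data size. A small illustrative helper (our own, not part of the client):

```typescript
// Approximate raw-storage overhead factor for an erasure-coded QSFS disk.
// Any `minimalShards` of `expectedShards` shards can rebuild the data,
// so each byte of data is stored as expectedShards/minimalShards bytes of shards.
function qsfsOverheadFactor(minimalShards: number, expectedShards: number): number {
  if (minimalShards <= 0 || expectedShards < minimalShards) {
    throw Error("need 0 < minimalShards <= expectedShards");
  }
  return expectedShards / minimalShards;
}

// For the example above: qsfsOverheadFactor(2, 4) === 2, i.e. roughly 2x raw storage.
```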
### Getting the Deployment Information
```ts
const l = await grid3.machines.getObj(vms.name);
log(l);
```
### Deleting a Deployment
```ts
// delete
const d = await grid3.machines.delete({ name: machines_name });
log(d);
const r = await grid3.qsfs_zdbs.delete({ name: qsfs_name });
log(r);
```

<h1>Deploying ZDBs for QSFS</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Example code](#example-code)
- [Detailed explanation](#detailed-explanation)
- [Getting the client](#getting-the-client)
- [Preparing the nodes](#preparing-the-nodes)
- [Preparing ZDBs](#preparing-zdbs)
- [Deploying the ZDBs](#deploying-the-zdbs)
- [Getting deployment information](#getting-deployment-information)
- [Deleting a deployment](#deleting-a-deployment)
***
## Introduction
We show how to deploy ZDBs for QSFS on the TFGrid with the Javascript client.
## Prerequisites
- Make sure you have your [client](./grid3_javascript_loadclient.md) prepared
## Example code
````typescript
import { FilterOptions, QSFSZDBSModel } from "../src";
import { getClient } from "./client_loader";
import { log } from "./utils";
async function main() {
const grid3 = await getClient();
const qsfs_name = "zdbsQsfsDemo";
const qsfsQueryOptions: FilterOptions = {
hru: 8,
availableFor: grid3.twinId,
farmId: 1,
};
const qsfsNodes = [];
const allNodes = await grid3.capacity.filterNodes(qsfsQueryOptions);
if (allNodes.length >= 2) {
qsfsNodes.push(+allNodes[0].nodeId, +allNodes[1].nodeId);
} else {
throw Error("Couldn't find nodes for qsfs");
}
const qsfs: QSFSZDBSModel = {
name: qsfs_name,
count: 12,
node_ids: qsfsNodes,
password: "mypassword",
disk_size: 1,
description: "my zdbs test",
metadata: "",
};
const deploy_res = await grid3.qsfs_zdbs.deploy(qsfs);
log(deploy_res);
const zdbs_data = await grid3.qsfs_zdbs.get({ name: qsfs_name });
log(zdbs_data);
await grid3.disconnect();
}
main();
````
## Detailed explanation
### Getting the client
```typescript
const grid3 = await getClient();
```
### Preparing the nodes
We need to deploy the ZDBs on two different nodes, so we set up the filters here to retrieve the available nodes.
````typescript
const qsfsQueryOptions: FilterOptions = {
    hru: 8,
availableFor: grid3.twinId,
farmId: 1,
};
const qsfsNodes = [];
const allNodes = await grid3.capacity.filterNodes(qsfsQueryOptions);
if (allNodes.length >= 2) {
qsfsNodes.push(+allNodes[0].nodeId, +allNodes[1].nodeId);
} else {
throw Error("Couldn't find nodes for qsfs");
}
````
Now we have two nodes in `qsfsNodes`.
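The node-selection logic above can be factored into a small standalone helper (a sketch; `NodeInfo` and `pickTwoNodeIds` are illustrative names, not part of the client API):

```ts
// Illustrative helper: pick the first two distinct node IDs from a
// filterNodes-style result. filterNodes returns IDs as strings, hence
// the numeric conversion mirroring the `+` casts in the example above.
interface NodeInfo {
  nodeId: string;
}

function pickTwoNodeIds(nodes: NodeInfo[]): [number, number] {
  if (nodes.length < 2) {
    throw new Error("Couldn't find nodes for qsfs");
  }
  return [+nodes[0].nodeId, +nodes[1].nodeId];
}
```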
### Preparing ZDBs
````typescript
const qsfs_name = "zdbsQsfsDemo";
````
Here we prepare a name that will be used across the client to reference the QSFS ZDBs deployment.
### Deploying the ZDBs
````typescript
const qsfs: QSFSZDBSModel = {
name: qsfs_name,
count: 12,
node_ids: qsfsNodes,
password: "mypassword",
disk_size: 1,
description: "my qsfs test",
metadata: "",
};
const deploy_res = await grid3.qsfs_zdbs.deploy(qsfs);
log(deploy_res);
````
Here we deploy `12` ZDBs on the nodes in `qsfsNodes` with password `mypassword`, each with a disk size of `1` GB. The client automatically adds 4 extra ZDBs for QSFS metadata.
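Assuming the client indeed adds 4 metadata ZDBs on top of the requested count, the total number of deployed ZDB workloads can be sketched as:

```ts
// Illustrative only: estimate the total ZDB workloads for a QSFS deployment,
// assuming the client adds 4 metadata ZDBs on top of the requested data count.
const METADATA_ZDBS = 4;

function totalZdbCount(requestedCount: number): number {
  return requestedCount + METADATA_ZDBS;
}
```

So with `count: 12` as in the example, you can expect 16 ZDB workloads in total.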
### Getting deployment information
````typescript
const zdbs_data = await grid3.qsfs_zdbs.get({ name: qsfs_name });
log(zdbs_data);
````
### Deleting a deployment
````typescript
const delete_response = await grid3.qsfs_zdbs.delete({ name: qsfs_name });
log(delete_response);
````

<h1> Javascript Client </h1>
This section covers developing projects on top of Threefold Grid using Javascript language.
Javascript has a huge ecosystem and is a first-class citizen when it comes to blockchain technologies like Substrate; that is one of the reasons it became one of the very first supported languages on the grid.
Please make sure to check the [basics](../../system_administrators/getstarted/tfgrid3_getstarted.md) before continuing.
<h2> Table of Contents </h2>
- [Installation](./grid3_javascript_installation.md)
- [Loading Client](./grid3_javascript_loadclient.md)
- [Deploy a VM](./grid3_javascript_vm.md)
- [Capacity Planning](./grid3_javascript_capacity_planning.md)
- [Deploy Multiple VMs](./grid3_javascript_vms.md)
- [Deploy CapRover](./grid3_javascript_caprover.md)
- [Gateways](./grid3_javascript_vm_gateways.md)
- [Deploy a Kubernetes Cluster](./grid3_javascript_kubernetes.md)
- [Deploy a ZDB](./grid3_javascript_zdb.md)
- [Deploy ZDBs for QSFS](./grid3_javascript_qsfs_zdbs.md)
- [QSFS](./grid3_javascript_qsfs.md)
- [Key Value Store](./grid3_javascript_kvstore.md)
- [VM with Wireguard and Gateway](./grid3_wireguard_gateway.md)
- [GPU Support](./grid3_javascript_gpu_support.md)

## How to run the scripts
- Set your grid3 client configuration in `scripts/client_loader.ts`, or simply use the `config.json` file
- Update the specs of your customized deployments
- Run using [ts-node](https://www.npmjs.com/ts-node)
```bash
npx ts-node --project tsconfig-node.json scripts/zdb.ts
```
or
```bash
yarn run ts-node --project tsconfig-node.json scripts/zdb.ts
```

<h1> Deploying a VM </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Example](#example)
- [Detailed Explanation](#detailed-explanation)
- [Building Network](#building-network)
- [Building the Disk Model](#building-the-disk-model)
- [Building the VM](#building-the-vm)
- [Building VMs Collection](#building-vms-collection)
- [Deployment](#deployment)
- [Getting Deployment Information](#getting-deployment-information)
- [Deleting a Deployment](#deleting-a-deployment)
***
## Introduction
We present information on how to deploy a VM with the Javascript client with concrete examples.
## Example
```ts
import { DiskModel, FilterOptions, MachineModel, MachinesModel, NetworkModel } from "../src";
import { config, getClient } from "./client_loader";
import { log } from "./utils";
async function main() {
const grid3 = await getClient();
// create network Object
const n = new NetworkModel();
n.name = "dynamictest";
n.ip_range = "10.249.0.0/16";
// create disk Object
const disk = new DiskModel();
disk.name = "dynamicDisk";
disk.size = 8;
disk.mountpoint = "/testdisk";
const vmQueryOptions: FilterOptions = {
cru: 1,
mru: 1, // GB
sru: 1,
availableFor: grid3.twinId,
country: "Belgium",
};
// create vm node Object
const vm = new MachineModel();
vm.name = "testvm";
vm.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId; // TODO: allow random choice
vm.disks = [disk];
vm.public_ip = false;
vm.planetary = true;
vm.cpu = 1;
vm.memory = 1024;
vm.rootfs_size = 0;
vm.flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist";
vm.entrypoint = "/sbin/zinit init";
vm.env = {
SSH_KEY: config.ssh_key,
};
// create VMs Object
const vms = new MachinesModel();
vms.name = "dynamicVMS";
vms.network = n;
vms.machines = [vm];
vms.metadata = "{'testVMs': true}";
vms.description = "test deploying VMs via ts grid3 client";
// deploy vms
const res = await grid3.machines.deploy(vms);
log(res);
// get the deployment
const l = await grid3.machines.getObj(vms.name);
log(l);
// // delete
// const d = await grid3.machines.delete({ name: vms.name });
// log(d);
await grid3.disconnect();
}
main();
```
## Detailed Explanation
### Building Network
```ts
// create network Object
const n = new NetworkModel();
n.name = "dynamictest";
n.ip_range = "10.249.0.0/16";
```
Here we prepare the network model by giving our network a name and specifying the IP range it will span.
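Since the network is an overlay, the examples all use private subnets for `ip_range`. A quick sanity check along these lines can catch typos before deploying (an illustrative sketch, not part of the client; it assumes the range should be a private IPv4 subnet):

```ts
// Hypothetical sanity check for an ip_range value like "10.249.0.0/16":
// verify it parses as CIDR and falls in a private IPv4 block.
function isPrivateRange(cidr: string): boolean {
  const m = cidr.match(/^(\d+)\.(\d+)\.(\d+)\.(\d+)\/(\d+)$/);
  if (!m) return false;
  const a = Number(m[1]);
  const b = Number(m[2]);
  const prefix = Number(m[5]);
  if (prefix < 1 || prefix > 30) return false;
  return (
    a === 10 ||                          // 10.0.0.0/8
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168)             // 192.168.0.0/16
  );
}
```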
### Building the Disk Model
```ts
// create disk Object
const disk = new DiskModel();
disk.name = "dynamicDisk";
disk.size = 8;
disk.mountpoint = "/testdisk";
```
Here we create the disk model, specifying its name, its size in GB, and where it will eventually be mounted.
### Building the VM
```ts
// create vm node Object
const vm = new MachineModel();
vm.name = "testvm";
vm.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId; // TODO: allow random choice
vm.disks = [disk];
vm.public_ip = false;
vm.planetary = true;
vm.cpu = 1;
vm.memory = 1024;
vm.rootfs_size = 0;
vm.flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist";
vm.entrypoint = "/sbin/zinit init";
vm.env = {
SSH_KEY: config.ssh_key,
};
```
Now we move on to the VM model, which will be used to build our `zmachine` object.
We need to specify its:
- name
- node_id: where it will get deployed
- disks: disks model collection
- memory
- root filesystem size
- flist: the image it is going to start from. Check the [supported flists](../flist/grid3_supported_flists.md)
- entry point: entrypoint command / script to execute
- env: the environment variables needed, e.g. the SSH keys used
- public ip: if we want to have a public ip attached to the VM
- planetary: to enable planetary network on VM
### Building VMs Collection
```ts
// create VMs Object
const vms = new MachinesModel();
vms.name = "dynamicVMS";
vms.network = n;
vms.machines = [vm];
vms.metadata = "{'testVMs': true}";
vms.description = "test deploying VMs via ts grid3 client";
```
Here it's quite simple: we can add one or more VMs to the `machines` property to have them deployed as part of our project.
### Deployment
```ts
// deploy vms
const res = await grid3.machines.deploy(vms);
log(res);
```
### Getting Deployment Information
You can do so based on the name you gave to the `vms` collection.
```ts
// get the deployment
const l = await grid3.machines.getObj(vms.name);
log(l);
```
### Deleting a Deployment
```ts
// delete
const d = await grid3.machines.delete({ name: vms.name });
log(d);
```
In the underlying layer we cancel the contracts that were created on the chain; as a result, all of the workloads tied to this project will get deleted.

<h1> Deploying a VM and exposing it over a Gateway Prefix </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Example code](#example-code)
- [Detailed explanation](#detailed-explanation)
- [deploying](#deploying)
- [getting deployment object](#getting-deployment-object)
- [deletion](#deletion)
- [Deploying a VM and exposing it over a Gateway using a Full domain](#deploying-a-vm-and-exposing-it-over-a-gateway-using-a-full-domain)
- [Example code](#example-code-1)
- [Detailed explanation](#detailed-explanation-1)
- [deploying](#deploying-1)
- [get deployment object](#get-deployment-object)
- [deletion](#deletion-1)
***
## Introduction
After the [deployment of a VM](./grid3_javascript_vm.md), it's time to expose it to the world.
## Example code
```ts
import { FilterOptions, GatewayNameModel } from "../src";
import { getClient } from "./client_loader";
import { log } from "./utils";
// read more about the gateway types in this doc: https://github.com/threefoldtech/zos/tree/main/docs/gateway
async function main() {
const grid3 = await getClient();
const gatewayQueryOptions: FilterOptions = {
gateway: true,
farmId: 1,
};
const gw = new GatewayNameModel();
gw.name = "test";
gw.node_id = +(await grid3.capacity.filterNodes(gatewayQueryOptions))[0].nodeId;
gw.tls_passthrough = false;
// the backends have to be in this format `http://ip:port` or `https://ip:port`, and the `ip` pingable from the node so using the ygg ip or public ip if available.
gw.backends = ["http://185.206.122.35:8000"];
// deploy
const res = await grid3.gateway.deploy_name(gw);
log(res);
// get the deployment
const l = await grid3.gateway.getObj(gw.name);
log(l);
// // delete
// const d = await grid3.gateway.delete_name({ name: gw.name });
// log(d);
grid3.disconnect();
}
main();
```
## Detailed explanation
```ts
const gw = new GatewayNameModel();
gw.name = "test";
gw.node_id = +(await grid3.capacity.filterNodes(gatewayQueryOptions))[0].nodeId;
gw.tls_passthrough = false;
gw.backends = ["http://185.206.122.35:8000"];
```
- we created a gateway name model and gave it a `name` (that's why it's called GatewayName) of `test`, to be deployed on a gateway node, ending up with a domain like `test.gent01.devnet.grid.tf`
- we create a proxy on the gateway to send the traffic coming to `test.gent01.devnet.grid.tf` to the backend `http://185.206.122.35:8000`; we set `tls_passthrough` to `false` to let the gateway terminate the TLS traffic, and if you set it to `true` instead, your backend service needs to do the TLS termination itself
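The resulting domain is simply the chosen name prefixed to the gateway node's base domain, which can be sketched as (the base domain here is an assumption for illustration; the actual value depends on the gateway node):

```ts
// Sketch of how the final domain is composed: the GatewayNameModel name
// becomes a subdomain of the gateway node's base domain.
function gatewayDomain(name: string, nodeBaseDomain: string): string {
  return `${name}.${nodeBaseDomain}`;
}
```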
### deploying
```ts
// deploy
const res = await grid3.gateway.deploy_name(gw);
log(res);
```
This deploys the `GatewayNameModel` on the grid.
### getting deployment object
```ts
const l = await grid3.gateway.getObj(gw.name);
log(l);
```
Getting the deployment information can be done using `getObj`.
### deletion
```ts
const d = await grid3.gateway.delete_name({ name: gw.name });
log(d);
```
## Deploying a VM and exposing it over a Gateway using a Full domain
After the [deployment of a VM](./grid3_javascript_vm.md), we can also expose it to the world over a full domain that we own.
## Example code
```ts
import { FilterOptions, GatewayFQDNModel } from "../src";
import { getClient } from "./client_loader";
import { log } from "./utils";
// read more about the gateway types in this doc: https://github.com/threefoldtech/zos/tree/main/docs/gateway
async function main() {
const grid3 = await getClient();
const gatewayQueryOptions: FilterOptions = {
gateway: true,
farmId: 1,
};
const gw = new GatewayFQDNModel();
gw.name = "applyFQDN";
gw.node_id = +(await grid3.capacity.filterNodes(gatewayQueryOptions))[0].nodeId;
gw.fqdn = "test.hamada.grid.tf";
gw.tls_passthrough = false;
// the backends have to be in this format `http://ip:port` or `https://ip:port`, and the `ip` pingable from the node so using the ygg ip or public ip if available.
gw.backends = ["http://185.206.122.35:8000"];
// deploy
const res = await grid3.gateway.deploy_fqdn(gw);
log(res);
// get the deployment
const l = await grid3.gateway.getObj(gw.name);
log(l);
// // delete
// const d = await grid3.gateway.delete_fqdn({ name: gw.name });
// log(d);
grid3.disconnect();
}
main();
```
## Detailed explanation
```ts
const gw = new GatewayFQDNModel();
gw.name = "applyFQDN";
gw.node_id = 1;
gw.fqdn = "test.hamada.grid.tf";
gw.tls_passthrough = false;
gw.backends = ["my yggdrasil IP"];
```
- we created a `GatewayFQDNModel` and gave it the name `applyFQDN`, to be deployed on gateway node `1`, and set `fqdn` to a domain we own, `test.hamada.grid.tf`
- we created a record on our name provider for `test.hamada.grid.tf` to point to the IP of gateway node `1`
- we specified that the backend would be a Yggdrasil IP, so once this is deployed, when we go to `test.hamada.grid.tf` we reach the gateway server, and from there our traffic goes to the backend.
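The code comments in both examples stress the required backend format, `http://ip:port` or `https://ip:port`. A quick check like the following (illustrative only, and limited to IPv4-style hosts) captures that rule:

```ts
// Illustrative check for the backend format noted in the examples:
// `http://ip:port` or `https://ip:port`.
function isValidBackend(url: string): boolean {
  return /^https?:\/\/[^\s/:]+:\d+$/.test(url);
}
```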
### deploying
```ts
// deploy
const res = await grid3.gateway.deploy_fqdn(gw);
log(res);
```
This deploys the `GatewayFQDNModel` on the grid.
### get deployment object
```ts
const l = await grid3.gateway.getObj(gw.name);
log(l);
```
Getting the deployment information can be done using `getObj`.
### deletion
```ts
const d = await grid3.gateway.delete_fqdn({ name: gw.name });
log(d);
```

<h1> Deploying multiple VMs</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Example code](#example-code)
***
## Introduction
It is possible to deploy multiple VMs with the Javascript client.
## Example code
```ts
import { DiskModel, FilterOptions, MachineModel, MachinesModel, NetworkModel } from "../src";
import { config, getClient } from "./client_loader";
import { log } from "./utils";
async function main() {
const grid3 = await getClient();
// create network Object
const n = new NetworkModel();
n.name = "monNetwork";
n.ip_range = "10.238.0.0/16";
// create disk Object
const disk1 = new DiskModel();
disk1.name = "newDisk1";
disk1.size = 1;
disk1.mountpoint = "/newDisk1";
const vmQueryOptions: FilterOptions = {
cru: 1,
mru: 1, // GB
sru: 1,
availableFor: grid3.twinId,
farmId: 1,
};
// create vm node Object
const vm1 = new MachineModel();
vm1.name = "testvm1";
vm1.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId;
vm1.disks = [disk1];
vm1.public_ip = false;
vm1.planetary = true;
vm1.cpu = 1;
vm1.memory = 1024;
vm1.rootfs_size = 0;
vm1.flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist";
vm1.entrypoint = "/sbin/zinit init";
vm1.env = {
SSH_KEY: config.ssh_key,
};
// create disk Object
const disk2 = new DiskModel();
disk2.name = "newDisk2";
disk2.size = 1;
disk2.mountpoint = "/newDisk2";
// create another vm node Object
const vm2 = new MachineModel();
vm2.name = "testvm2";
vm2.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[1].nodeId;
vm2.disks = [disk2];
vm2.public_ip = false;
vm2.planetary = true;
vm2.cpu = 1;
vm2.memory = 1024;
vm2.rootfs_size = 0;
vm2.flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist";
vm2.entrypoint = "/sbin/zinit init";
vm2.env = {
SSH_KEY: config.ssh_key,
};
// create VMs Object
const vms = new MachinesModel();
vms.name = "monVMS";
vms.network = n;
vms.machines = [vm1, vm2];
vms.metadata = "{'testVMs': true}";
vms.description = "test deploying VMs via ts grid3 client";
// deploy vms
const res = await grid3.machines.deploy(vms);
log(res);
// get the deployment
const l = await grid3.machines.getObj(vms.name);
log(l);
// // delete
// const d = await grid3.machines.delete({ name: vms.name });
// log(d);
await grid3.disconnect();
}
main();
```
It's similar to the previous section on [deploying a single VM](../javascript/grid3_javascript_vm.md); it just adds more VM objects to the `vms` collection.
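Since the two VM objects differ only in name, disk, and node, the repetition can be factored into a small helper. This is a sketch using plain objects and hypothetical names (`VmSpec`, `makeVmSpecs`), not the real `MachineModel` class:

```ts
// Sketch: generate N near-identical VM specs, varying only the name,
// disk name, mountpoint, and target node, following the naming pattern
// used in the example above (testvm1/newDisk1, testvm2/newDisk2, ...).
interface VmSpec {
  name: string;
  node_id: number;
  diskName: string;
  mountpoint: string;
}

function makeVmSpecs(nodeIds: number[]): VmSpec[] {
  return nodeIds.map((node_id, i) => ({
    name: `testvm${i + 1}`,
    node_id,
    diskName: `newDisk${i + 1}`,
    mountpoint: `/newDisk${i + 1}`,
  }));
}
```

Each resulting spec could then be copied into a `MachineModel` before being added to the `machines` array.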

<h1>Deploying ZDB</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Example code](#example-code)
- [Detailed explanation](#detailed-explanation)
- [Getting the client](#getting-the-client)
- [Building the model](#building-the-model)
- [preparing ZDBs collection](#preparing-zdbs-collection)
- [Deployment](#deployment)
- [Getting Deployment information](#getting-deployment-information)
- [Deleting a deployment](#deleting-a-deployment)
***
## Introduction
We show how to deploy ZDB on the TFGrid with the Javascript client.
## Prerequisites
- Make sure you have your [client](./grid3_javascript_loadclient.md) prepared
## Example code
```ts
import { FilterOptions, ZDBModel, ZdbModes, ZDBSModel } from "../src";
import { getClient } from "./client_loader";
import { log } from "./utils";
async function main() {
const grid3 = await getClient();
const zdbQueryOptions: FilterOptions = {
sru: 1,
hru: 1,
availableFor: grid3.twinId,
farmId: 1,
};
// create zdb object
const zdb = new ZDBModel();
zdb.name = "hamada";
zdb.node_id = +(await grid3.capacity.filterNodes(zdbQueryOptions))[0].nodeId;
zdb.mode = ZdbModes.user;
zdb.disk_size = 1;
zdb.publicNamespace = false;
zdb.password = "testzdb";
// create zdbs object
const zdbs = new ZDBSModel();
zdbs.name = "tttzdbs";
zdbs.zdbs = [zdb];
zdbs.metadata = '{"test": "test"}';
// deploy zdb
const res = await grid3.zdbs.deploy(zdbs);
log(res);
// get the deployment
const l = await grid3.zdbs.getObj(zdbs.name);
log(l);
// // delete
// const d = await grid3.zdbs.delete({ name: zdbs.name });
// log(d);
await grid3.disconnect();
}
main();
```
## Detailed explanation
### Getting the client
```ts
const grid3 = await getClient();
```
### Building the model
```ts
// create zdb object
const zdb = new ZDBModel();
zdb.name = "hamada";
zdb.node_id = +(await grid3.capacity.filterNodes(zdbQueryOptions))[0].nodeId;
zdb.mode = ZdbModes.user;
zdb.disk_size = 1;
zdb.publicNamespace = false;
zdb.password = "testzdb";
```
Here we define a `ZDBModel` and set the relevant properties, e.g.:
- name
- node_id: the node to deploy on
- mode: `user` or `seq`
- disk_size: disk size in GB
- publicNamespace: a public namespace can be read-only if a password is set
- password: namespace password
### preparing ZDBs collection
```ts
// create zdbs object
const zdbs = new ZDBSModel();
zdbs.name = "tttzdbs";
zdbs.zdbs = [zdb];
zdbs.metadata = '{"test": "test"}';
```
You can attach multiple ZDBs to the collection and send it for deployment.
### Deployment
```ts
const res = await grid3.zdbs.deploy(zdbs);
log(res);
```
### Getting Deployment information
`getObj` gives detailed information about the workload.
```ts
// get the deployment
const l = await grid3.zdbs.getObj(zdbs.name);
log(l);
```
### Deleting a deployment
The `.delete` method cancels the relevant contracts related to that ZDBs deployment.
```ts
// delete
const d = await grid3.zdbs.delete({ name: zdbs.name });
log(d);
```

<h1> Deploying a VM with Wireguard and Gateway </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Client Configurations](#client-configurations)
- [Code Example](#code-example)
- [Detailed Explanation](#detailed-explanation)
- [Get the Client](#get-the-client)
- [Get the Nodes](#get-the-nodes)
- [Deploy the VM](#deploy-the-vm)
- [Deploy the Gateway](#deploy-the-gateway)
- [Get the Deployments Information](#get-the-deployments-information)
- [Disconnect the Client](#disconnect-the-client)
- [Delete the Deployments](#delete-the-deployments)
- [Conclusion](#conclusion)
***
## Introduction
We present here the relevant information when it comes to deploying a virtual machine with Wireguard and a gateway.
## Client Configurations
To configure the client, have a look at [this section](./grid3_javascript_loadclient.md).
## Code Example
```ts
import { FilterOptions, GatewayNameModel, GridClient, MachineModel, MachinesModel, NetworkModel } from "../src";
import { config, getClient } from "./client_loader";
import { log } from "./utils";
function createNetworkModel(gwNode: number, name: string): NetworkModel {
return {
name,
addAccess: true,
accessNodeId: gwNode,
ip_range: "10.238.0.0/16",
} as NetworkModel;
}
function createMachineModel(node: number) {
return {
name: "testvm1",
node_id: node,
public_ip: false,
planetary: true,
cpu: 1,
memory: 1024 * 2,
rootfs_size: 0,
disks: [],
flist: "https://hub.grid.tf/tf-official-apps/threefoldtech-ubuntu-22.04.flist",
entrypoint: "/usr/bin/python3 -m http.server --bind ::",
env: {
SSH_KEY: config.ssh_key,
},
} as MachineModel;
}
function createMachinesModel(vm: MachineModel, network: NetworkModel): MachinesModel {
return {
name: "newVMs",
network,
machines: [vm],
metadata: "",
description: "test deploying VMs with wireguard via ts grid3 client",
} as MachinesModel;
}
function createGwModel(node_id: number, ip: string, networkName: string, name: string, port: number) {
return {
name,
node_id,
tls_passthrough: false,
backends: [`http://${ip}:${port}`],
network: networkName,
} as GatewayNameModel;
}
async function main() {
const grid3 = await getClient();
const gwNode = +(await grid3.capacity.filterNodes({ gateway: true }))[0].nodeId;
const vmQueryOptions: FilterOptions = {
cru: 1,
mru: 2, // GB
availableFor: grid3.twinId,
farmId: 1,
};
const vmNode = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId;
const network = createNetworkModel(gwNode, "monNetwork");
const vm = createMachineModel(vmNode);
const machines = createMachinesModel(vm, network);
log(`Deploying vm on node: ${vmNode}, with network node: ${gwNode}`);
// deploy the vm
const vmResult = await grid3.machines.deploy(machines);
log(vmResult);
const deployedVm = await grid3.machines.getObj(machines.name);
log("+++ deployed vm +++");
log(deployedVm);
// deploy the gateway
const vmPrivateIP = (deployedVm as { interfaces: { ip: string }[] }[])[0].interfaces[0].ip;
const gateway = createGwModel(gwNode, vmPrivateIP, network.name, "pyserver", 8000);
log(`deploying gateway ${network.name} on node ${gwNode}`);
const gatewayResult = await grid3.gateway.deploy_name(gateway);
log(gatewayResult);
log("+++ Deployed gateway +++");
const deployedGw = await grid3.gateway.getObj(gateway.name);
log(deployedGw);
await grid3.disconnect();
}
main();
```
## Detailed Explanation
What we need to do with that code is: Deploy a name gateway with the wireguard IP as the backend; that allows accessing a server inside the vm through the gateway using the private network (wireguard) as the backend.
This will be done through the following steps:
### Get the Client
```ts
const grid3 = await getClient();
```
### Get the Nodes
Determine the deploying nodes for the vm, network and gateway.
- Gateway and network access node
```ts
const gwNode = +(await grid3.capacity.filterNodes({ gateway: true }))[0].nodeId;
```
Using the `filterNodes` method, we get the first gateway node ID; we will deploy the gateway on it and use it as our network access node.
> The gateway node must be the same as the network access node.
- VM node
We need to set the filter options first; for this example we will deploy the VM with 1 CPU and 2 GB of memory.
Now we create a `FilterOptions` object with those specs and get the first node ID from the result.
```ts
const vmQueryOptions: FilterOptions = {
cru: 1,
mru: 2, // GB
availableFor: grid3.twinId,
farmId: 1,
};
const vmNode = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId;
```
### Deploy the VM
We need to create the network and machine models, then deploy the VM.
```ts
const network = createNetworkModel(gwNode, "monNetwork");
const vm = createMachineModel(vmNode);
const machines = createMachinesModel(vm, network);
log(`Deploying vm on node: ${vmNode}, with network node: ${gwNode}`);
// deploy the vm
const vmResult = await grid3.machines.deploy(machines);
log(vmResult);
```
- `createNetworkModel`:
we create a network, set the node ID to `gwNode` and the name to `monNetwork`, and inside the function we set `addAccess: true` to add __wireguard__ access.
- `createMachineModel` and `createMachinesModel` is similar to the previous section of [deploying a single VM](../javascript/grid3_javascript_vm.md), but we are passing the created `NetworkModel` to the machines model and the entry point here runs a simple python server.
### Deploy the Gateway
Now that we have our VM deployed with its network, we need to make the gateway on the same node and network, pointing to the VM's private IP address.
- Get the VM's private IP address:
```ts
const vmPrivateIP = (deployedVm as { interfaces: { ip: string }[] }[])[0].interfaces[0].ip;
```
- Create the Gateway name model:
```ts
const gateway = createGwModel(gwNode, vmPrivateIP, network.name, "pyserver", 8000);
```
This will create a `GatewayNameModel` with the following properties:
- `name` : the subdomain name
- `node_id` : the gateway node id
- `tls_passthrough: false`
- `backends: [`http://${ip}:${port}`]` : the private ip address and the port number of our machine
- `network: networkName` : the network name, we already created earlier.
### Get the Deployments Information
```ts
const deployedVm = await grid3.machines.getObj(machines.name);
log("+++ deployed vm +++");
log(deployedVm);
log("+++ Deployed gateway +++");
const deployedGw = await grid3.gateway.getObj(gateway.name);
log(deployedGw);
```
- `deployedVm`: an array with one object containing the details of the VM deployment.
```ts
[
{
version: 0,
contractId: 30658,
nodeId: 11,
name: 'testvm1',
created: 1686225126,
status: 'ok',
message: '',
flist: 'https://hub.grid.tf/tf-official-apps/threefoldtech-ubuntu-22.04.flist',
publicIP: null,
planetary: '302:9e63:7d43:b742:3582:a831:cd41:3f19',
interfaces: [ { network: 'monNetwork', ip: '10.238.2.2' } ],
capacity: { cpu: 1, memory: 2048 },
mounts: [],
env: {
SSH_KEY: 'ssh'
},
entrypoint: '/usr/bin/python3 -m http.server --bind ::',
metadata: '{"type":"vm","name":"newVMs","projectName":""}',
description: 'test deploying VMs with wireguard via ts grid3 client',
rootfs_size: 0,
corex: false
}
]
```
- `deployedGw`: an array with one object containing the details of the gateway name.
```ts
[
{
version: 0,
contractId: 30659,
name: 'pyserver1',
created: 1686225139,
status: 'ok',
message: '',
type: 'gateway-name-proxy',
domain: 'pyserver1.gent02.dev.grid.tf',
tls_passthrough: false,
backends: [ 'http://10.238.2.2:8000' ],
metadata: '{"type":"gateway","name":"pyserver1","projectName":""}',
description: ''
}
]
```
Now we can access the VM using the `domain` returned in the object.
### Disconnect the Client
Finally, we need to disconnect the client using `await grid3.disconnect();`
### Delete the Deployments
If we want to delete the deployments we can just do this:
```ts
const deletedMachines = await grid3.machines.delete({ name: machines.name});
log(deletedMachines);
const deletedGW = await grid3.gateway.delete_name({ name: gateway.name});
log(deletedGW);
```
## Conclusion
This section presented a detailed description on how to create a virtual machine with private IP using Wireguard and use it as a backend for a name gateway.
If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.

- [Installation](@grid3_javascript_installation)
- [Loading client](@grid3_javascript_loadclient)
- [Deploy a VM](@grid3_javascript_vm)
- [Capacity planning](@grid3_javascript_capacity_planning)
- [Deploy multiple VMs](@grid3_javascript_vms)
- [Deploy CapRover](@grid3_javascript_caprover)
- [Gateways](@grid3_javascript_vm_gateways)
- [Deploy a Kubernetes cluster](@grid3_javascript_kubernetes)
- [Deploy a ZDB](@grid3_javascript_zdb)
- [QSFS](@grid3_javascript_qsfs)
- [Key Value Store](@grid3_javascript_kvstore)