...
20
tosort/technology/architecture/cloud_architecture_.md
Normal file
@@ -0,0 +1,20 @@

## Architecture Overview

**More info:**

- QSSS
- QSFS
- [Quantum Safe Network Concept](sdk:archi_qsnetwork)
- [Zero-OS Network](sdk:capacity_network)
- [ThreeFold Network = Planetary Network](sdk:archi_psnw)
- [Web Gateway](sdk:archi_webgateway)
- TFGrid
- [3Node](3node)
- [ThreeFold Connect](tfconnect)

<!--
These are outdated, need to change links:
- Payments - AutoPay twinautopay - no links found
- TFGrid Wallet cloud_wallet - no link found
-->
43
tosort/technology/architecture/cloud_wallet_.md
Normal file
@@ -0,0 +1,43 @@

# Wallet on Stellar Network

### Prepaid Wallets

The VDC has a built-in __prepaid wallet__, which is the wallet used to pay for the capacity requested in the VDC. This wallet expresses in TFT the remaining balance available to keep the reserved capacity operational.

This wallet is registered on the Stellar network and is used exclusively for capacity reservation, in line with the chosen VDC size.

Both the TFGrid testnet and mainnet are connected to the Stellar mainnet, so the TFTs used are the same. Testnet prices are substantially lower than mainnet prices, but there is no guarantee of continuity of operation: the testnet is reset at regular intervals, and the available capacity is also lower than on mainnet.

### A public key and a shared private key

The wallet is characterized by 2 strings:

- A public address, starting with a 'G', is the address that can be shared with anyone, as it is the address to mention when transferring tokens TO the wallet.
- A private key, starting with an 'S', is the secret that gives control over the wallet and is needed to generate outgoing transfers.
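These two strings are a standard Stellar keypair. A minimal sketch with the Python `stellar_sdk` package (purely illustrative, not part of the VDC tooling) shows what such a pair looks like:

```python
# pip install stellar-sdk  -- illustration only; the VDC provisions its own prepaid wallet
from stellar_sdk import Keypair

# Generate a fresh Stellar keypair to show the shape of the two strings.
kp = Keypair.random()

print(kp.public_key)  # 'G...' address: safe to share, used to receive TFT
print(kp.secret)      # 'S...' secret key: gives full control over the wallet, keep it private
```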
### Payment for Capacity Process

The prepaid wallet that is set up within your VDC is used exclusively for this purpose. The private key of this wallet is shared between you and the VDC provider:

- The VDC provider needs the private key to pay the farmer on a rolling basis: every hour an amount is transferred to the farmer(s) owning the reserved hardware capacity, so that it stays reserved for the next 2 weeks. These 2 weeks serve as a 'grace period': when the balance of the prepaid wallet reaches zero, you have 2 weeks to top up the wallet. You will be notified of this while the workload remains operational.

If the wallet has not been topped up after this 2-week grace period, the workload is removed and the capacity is made available again for new reservations.

## Top-up a Wallet

Please read the [Top-up](evdc_wallet_topup) page for instructions.

## Viewing Your Balance

Simply click on one of your existing wallets to see its details.

## Withdraw TFTs from the wallet

Since the private key is available to you, you can transfer tokens from the prepaid wallet to your personal TFT wallet. Evidently, transferring tokens out has a direct impact on the expiration date of your VDC.

### Your VDC Wallet Details

- The network is the Stellar mainnet (indicated with `STD` in the wallet information).
- [Trustlines](https://www.stellar.org/developers/guides/concepts/assets.html) are specific to the Stellar network and indicate that a user 'trusts' the issuer of an asset, in our case trusting ThreeFold Dubai as issuer of TFT.

Trustlines are network-specific, so they need to be established on both testnet and mainnet, and for every token someone intends to hold. Without a trustline, a wallet address cannot receive tokens.

To make this easier, trustlines are established automatically when creating a TFT wallet in the admin panel as well as in the ThreeFold Connect app. However, if you use a third-party Stellar wallet for your tokens, you need to create the trustlines yourself, as sketched below.
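A minimal sketch of creating such a trustline yourself with the Python `stellar_sdk` (assuming stellar_sdk version 8 or later; the issuer address and secret key below are placeholders, so replace them with the official TFT issuer account and your own wallet secret before use):

```python
from stellar_sdk import Asset, Keypair, Network, Server, TransactionBuilder

HORIZON = "https://horizon.stellar.org"    # Stellar mainnet Horizon endpoint
TFT_ISSUER = "G...TFT_ISSUER_PLACEHOLDER"  # replace with the official TFT issuer address
SECRET = "S..."                            # secret key of the wallet that should trust TFT

server = Server(HORIZON)
keypair = Keypair.from_secret(SECRET)
account = server.load_account(keypair.public_key)

# A change_trust operation opens the trustline so this account can hold TFT.
tx = (
    TransactionBuilder(
        source_account=account,
        network_passphrase=Network.PUBLIC_NETWORK_PASSPHRASE,
        base_fee=100,
    )
    .append_change_trust_op(asset=Asset("TFT", TFT_ISSUER))
    .set_timeout(30)
    .build()
)
tx.sign(keypair)
server.submit_transaction(tx)
```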
79
tosort/technology/architecture/evdc_qsfs_get_started_.md
Normal file
@@ -0,0 +1,79 @@

## Getting started

Any Quantum-Safe File System has 4 storage layers:

- an etcd metadata storage layer
- local storage
- the ZDB-FS fuse layer
- ZSTOR for the dispersed storage

There are 2 ways to run the zstor filesystem:

- a self-managed mode for the metadata
- a 'Quantum Storage Enabled' mode

The first mode combines the local storage, ZDB-FS and ZSTOR, but requires the etcd metadata layer to be run and managed manually.
The second mode is enabled by the `ENABLE QUANTUM STORAGE` button and provisions etcd to manage the metadata. Here all 4 layers are available (hence it consumes slightly more storage from your VDC).

### Manually Managed Metadata Mode

This Planetary Secure File System uses a ThreeFold VDC's storage nodes to back data up to the ThreeFold Grid. Below you'll find instructions for using an executable bootstrap file that runs on Linux or in a Linux Docker container to set up the complete environment necessary to use the file system.

Please note that this solution is currently for testing only, and some important features are still under development.

#### VDC and Z-Stor Config

If you haven't already, go ahead and [deploy a VDC](evdc_deploy). Then download the Z-Stor config file, found in the upper right corner of the `VDC Storage Nodes` screen. Unless you know that IPv6 works on your machine and within Docker, choose the IPv4 version of the file.

As described in [Manage Storage Nodes](evdc_storage), this file contains the necessary information to connect with the 0-DBs running on the storage nodes in your VDC. It also includes an encryption key used to encrypt uploaded data and a field to specify your etcd endpoints. Using the defaults here is fine.

#### Bootstrap Executable

Now download the zstor filesystem bootstrap, available [here](https://github.com/threefoldtech/quantum-storage/releases/download/v0.0.1/planetaryfs-bootstrap-linux-amd64).

> __Remark__: for now, the bootstrap executable is only available for Linux. We'll cover how to use it within an Ubuntu container in Docker, which also works on MacOS.

First, we'll start an Ubuntu container with Docker, enabling fuse file system capabilities. In a terminal window:

`docker run -it --name zdbfs --cap-add SYS_ADMIN --device /dev/fuse ubuntu:20.04`

Next, we'll copy the Z-Stor config file and the bootstrap executable into the running container. In a separate terminal window, navigate to where you downloaded the files and run:

`docker cp planetaryfs-bootstrap-linux-amd64 zdbfs:/root/`
`docker cp <yourzstorconfig.toml> zdbfs:/root/`

Back in the container's terminal window, `cd /root` and confirm that the two files are there with `ls`. Then make the bootstrap executable and run it, specifying your config file:

`chmod u+x planetaryfs-bootstrap-linux-amd64`
`./planetaryfs-bootstrap-linux-amd64 <yourzstorconfig.toml>`

The bootstrap's execution starts up all necessary components and shows you that the back-end is ready for dispersing the data.

After that, your Planetary Secure File System will be mounted at `/root/.threefold/mnt/zdbfs`. Files copied there are automatically stored on the grid incrementally, as fragments of a certain size (by default 32 MB) are filled. In a future release, this will no longer be a limitation.

### Provisioned Metadata Mode

Users who also want the metadata available out of the box, and want to use it in the Kubernetes cluster, need to push the `ENABLE QUANTUM STORAGE` button. This provisions etcd key-value stores in the VDC, which can also be used within a Kubernetes cluster.

Once Quantum Storage mode is enabled, you get an etcd for free.

**Remark**: this action can't be undone in your VDC: the etcd stores can be filled immediately, and deleting them could result in data loss. This is why a 'Disable Quantum Storage' option is considered too risky and is not available.

### Add node

Adding storage nodes manually is simple: press the `+ ADD NODE` button.

You'll be asked whether to deploy this storage node on the same farm or on another one. Spreading the data over multiple locations makes it more resilient against disaster.

If you choose `Yes`, select the farm of your choice and then pay for the extra capacity.
BIN
tosort/technology/architecture/img/3bot_wallet_detail.jpg
Normal file
After Width: | Height: | Size: 35 KiB |
BIN
tosort/technology/architecture/img/3layers_tf_.jpg
Normal file
After Width: | Height: | Size: 247 KiB |
BIN
tosort/technology/architecture/img/architecture_why_us.jpg
Normal file
After Width: | Height: | Size: 222 KiB |
BIN
tosort/technology/architecture/img/planet_fs.jpg
Normal file
After Width: | Height: | Size: 170 KiB |
BIN
tosort/technology/architecture/img/planetaryfs_add_node.jpg
Normal file
After Width: | Height: | Size: 101 KiB |
BIN
tosort/technology/architecture/img/planetaryfs_enable_qs.jpg
Normal file
After Width: | Height: | Size: 102 KiB |
BIN
tosort/technology/architecture/img/planetaryfs_farm.jpg
Normal file
After Width: | Height: | Size: 62 KiB |
BIN
tosort/technology/architecture/img/planetaryfs_pay.jpg
Normal file
After Width: | Height: | Size: 102 KiB |
BIN
tosort/technology/architecture/img/planetaryfs_zstor_config.jpg
Normal file
After Width: | Height: | Size: 102 KiB |
BIN
tosort/technology/architecture/img/quantum_safe_storage.jpg
Normal file
After Width: | Height: | Size: 137 KiB |
34
tosort/technology/architecture/threefold_filesystem.md
Normal file
@@ -0,0 +1,34 @@

# ThreeFold zstor filesystem (zstor)

Part of the eVDC is a set of Storage Nodes, which can be used as a storage infrastructure for files in any format.

## Mount Any Files in your Storage Infrastructure

The QSFS is a mechanism to mount any file system (in any format) on the grid, in a quantum-secure way.

This storage layer relies on 3 primitives of the ThreeFold technology:

- [0-db](https://github.com/threefoldtech/0-db) is the storage engine.
It is an always-append database which stores objects in an immutable format. It keeps history out-of-the-box, offers good performance on disk, low overhead, a simple data structure and easy backup (linear copy and immutable files).

- [0-stor-v2](https://github.com/threefoldtech/0-stor_v2) is used to disperse the data into chunks by performing 'forward-looking error-correcting code' (FLECC) on it and sending the fragments to safe locations.
It takes files in any format as input, encrypts the file with AES based on a user-defined key, then FLECC-encodes the file and spreads the result out to multiple 0-DBs. The number of generated chunks is configurable, making the scheme more or less robust against data loss through unavailable fragments; the sketch after this list shows the tradeoff. Even if some 0-DBs are unreachable, you can still retrieve the original data, and missing 0-DBs can even be rebuilt to restore full consistency. It is an essential element of the operational backup.

- [0-db-fs](https://github.com/threefoldtech/0-db-fs) is the filesystem driver which uses 0-DB as its primary storage engine. It manages the storage of directories and metadata in one dedicated namespace and file payloads in another.
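To make the dispersal tradeoff concrete, here is a small illustrative calculation (not the actual 0-stor-v2 code) of what a "k data + m parity" fragment scheme implies, using hypothetical numbers:

```python
# Illustrative numbers for a "k data + m parity" dispersal scheme, similar in
# spirit to what 0-stor-v2 does, though not its actual implementation.
def dispersal_stats(file_size_mb: float, k: int, m: int) -> dict:
    fragment_size = file_size_mb / k         # each of the k+m fragments is this large
    total_stored = fragment_size * (k + m)   # raw capacity consumed across all 0-DBs
    return {
        "fragments": k + m,
        "fragment_size_mb": round(fragment_size, 2),
        "total_stored_mb": round(total_stored, 2),
        "overhead_factor": round((k + m) / k, 2),  # e.g. 1.25x instead of 2x-3x for full replication
        "max_lost_fragments": m,                   # any m fragments can go missing, data still recoverable
    }

# Example: a 100 MB file split over 16 data fragments with 4 parity fragments
print(dispersal_stats(100, k=16, m=4))
```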

Together they form a storage layer that is quantum secure: even the most powerful computer can't hack the system, because no single node contains all of the information needed to reconstruct the data.

This concept scales forever, and you can bring any file system on top of it:

- S3 storage
- any backup system
- an FTP server
- IPFS and Hypercore distributed file sharing protocols
- ...
19
tosort/technology/consensus3_mechanism/consensus3.md
Normal file
@@ -0,0 +1,19 @@

# DAO Consensus Engine

!!!include:dao_info

## DAO Engine

On TFGrid 3.0, ThreeFold has implemented a DAO consensus engine using Polkadot/Substrate blockchain technology.

This is a powerful blockchain construct which allows us to run our TFGrid and maintain consensus on a global scale.

This system has been designed to be compatible with multiple blockchains.

!!!include:consensus3_overview_graph

!!!include:consensus3_toc

!!!def alias:consensus3,consensus_engine
@@ -0,0 +1,17 @@

### Consensus engine in relation to TFT Farming Rewards in TFGrid 3.0

!!!include:consensus3_overview_graph

The consensus engine checks the farming rules as defined in:

- [farming logic 3.0](farming_reward)
- [farming reward calculator](farming_calculator)

If uptime is at least 98% per month, the TFT will be rewarded to the farmer (for TFGrid 3.0; this can change later). A minimal sketch of this check is shown below.

All the data of the farmers and the 3Nodes is registered on TFChain.
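The uptime rule can be illustrated as follows (the real check runs on TFChain; the month length and threshold here are just example values):

```python
# Illustration of the 98%-uptime farming rule; the actual logic lives on TFChain.
SECONDS_PER_MONTH = 30 * 24 * 3600

def eligible_for_reward(uptime_seconds: int, threshold: float = 0.98) -> bool:
    """Return True when a 3Node's measured uptime for the month meets the threshold."""
    return uptime_seconds / SECONDS_PER_MONTH >= threshold

# Example: a node that was down for about 10 hours in a 30-day month
downtime = 10 * 3600
print(eligible_for_reward(SECONDS_PER_MONTH - downtime))  # True (~98.6% uptime)
```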

!!!include:consensus3_toc
44
tosort/technology/consensus3_mechanism/consensus3_oracles.md
Normal file
@@ -0,0 +1,44 @@

## Consensus 3.X Oracles Used

Oracles are external sources of information.

The TFChain captures and holds that information so that we get more certainty about its accuracy.

We have oracles for price and reputation, e.g. for TF Farmers and 3Nodes.

These oracles are implemented on TFChain for TFGrid 3.0.

```mermaid
graph TB
    subgraph Digital Currency Ecosystem
        money_blockchain[Money Blockchain Explorers]
        Exch1[Money Blockchain Decentralized Exchange]
        OracleEngine --> Exch1[Polkadot]
        OracleEngine --> Exch1[Money Blockchain Exchange]
        OracleEngine --> Exch2[Binance Exchange]
        OracleEngine --> Exch3[other... exchanges]
    end
    subgraph ThreeFold Grid
        Monitor_Engine --> 3Node1
        Monitor_Engine --> 3Node2
        Monitor_Engine --> 3Node3
    end
    subgraph TFChainNode1[TFGrid Blockchain Node]
        Monitor_Engine
        Explorers[TFChain Explorers] --> TFGridDB --> BCNode
        Explorers --> BCNode
        ConsensusEngine1 --> BCNode[Blockchain Validator Node]
        ConsensusEngine1 --> money_blockchain[Money Blockchain]
        ConsensusEngine1 --> ReputationEngine[Reputation Engine]
        ReputationEngine --> Monitor_Engine[Monitor Engine]
        ConsensusEngine1 --> OracleEngine[Oracle For Pricing Digital Currencies]
    end
```

> TODO: outdated info

!!!include:consensus3_toc
@@ -0,0 +1,51 @@

```mermaid
graph TB
    subgraph Money Blockchain
        money_blockchain --> account1
        money_blockchain --> account2
        money_blockchain --> account3
        click money_blockchain "/threefold/#money_blockchain"
    end
    subgraph TFChainNode1[TFChain BCNode]
        Explorer1 --> BCNode1
        ConsensusEngine1 --> BCNode1
        ConsensusEngine1 --> money_blockchain
        ConsensusEngine1 --> ReputationEngine1
        ReputationEngine1 --> Monitor_Engine1
        click ReputationEngine1 "/info/threefold/#reputationengine"
        click ConsensusEngine1 "/info/threefold/#consensusengine"
        click BCNode1 "/info/threefold/#bcnode"
        click Explorer1 "/info/threefold/#tfexplorer"
    end
    subgraph TFChainNode2[TFChain BCNode]
        Explorer2 --> BCNode2
        ConsensusEngine2 --> BCNode2
        ConsensusEngine2 --> money_blockchain
        ConsensusEngine2 --> ReputationEngine2
        ReputationEngine2 --> Monitor_Engine2
        click ReputationEngine2 "/info/threefold/#reputationengine"
        click ConsensusEngine2 "/info/threefold/#consensusengine"
        click BCNode2 "/info/threefold/#bcnode"
        click Explorer2 "/info/threefold/#tfexplorer"
    end
    Monitor_Engine1 --> 3Node1
    Monitor_Engine1 --> 3Node2
    Monitor_Engine1 --> 3Node3
    Monitor_Engine2 --> 3Node1
    Monitor_Engine2 --> 3Node2
    Monitor_Engine2 --> 3Node3
    click 3Node1 "/info/threefold/#3node"
    click 3Node2 "/info/threefold/#3node"
    click 3Node3 "/info/threefold/#3node"
    click Monitor_Engine1 "/info/threefold/#monitorengine"
    click Monitor_Engine2 "/info/threefold/#monitorengine"
```

*Click on the parts of the image; they lead to more info.*

> TODO: outdated info
@@ -0,0 +1,45 @@

# Consensus Mechanism

## Blockchain node components

!!!include:consensus3_overview_graph

- A blockchain node (= Substrate node) called TF-Chain, containing all entities interacting with each other on the TF-Grid
- An explorer = a REST + GraphQL interface to TF-Chain (GraphQL is a nice query language that makes it easy for everyone to query for info)
- Consensus Engine
  - a multisignature engine running on TF-Chain
  - the multisignature is done for the Money Blockchain accounts
  - it checks the AccountMetadata versus reality and, if OK, signs, which allows transactions to happen after validation of the "smart contract"
- SLA & reputation engine
  - each node's uptime is checked by the Monitor_Engine
  - bandwidth will also be checked in the future (starting 3.x)

### Remarks

- Each Monitor_Engine checks the uptime of X number of nodes (in the beginning it can do all nodes) and stores the info in a local DB (to keep a history of the checks).

## Principle

- We keep things as simple as we can.
- The Money Blockchain is used to hold the money.
- The Money Blockchain has all required features to allow users to manage their money: wallet support, decentralized exchange, good reporting, low transaction fees, ...
- The Substrate-based TFChain holds the metadata for the accounts, expressing what we need to know per account to allow the smart contracts to execute.
- Smart contracts are implemented using the multisignature feature on the Money Blockchain in combination with multisignature done by the Consensus_Engine (a simplified sketch of this quorum idea follows after this list).
- On money_blockchain:
  - each user has Money Blockchain accounts (each of them holds money)
  - there are normal accounts (people can freely transfer money from these accounts) as well as RestrictedAccounts: money cannot be transferred out of a RestrictedAccount unless consensus has been achieved by the ConsensusEngine.
- Restricted_Account
  - On Stellar we use the multisignature feature to make sure that a locked/vesting or FarmingPool account cannot transfer money unless consensus is achieved by the ConsensusEngine.
- Each account on money_blockchain (Money Blockchain account) that needs advanced features has an account record in TFChain, for features like:
  - lockup
  - vesting
  - minting (rewards to farmers)
  - TFTA to TFT conversion
- The account record in TFGrid_DB is called AccountMetadata.
- The AccountMetadata describes all the info the consensus engine needs to decide what to do for advanced features like vesting, locking, ...
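The multisignature idea can be reduced to a simple quorum rule, sketched here purely for illustration (the real mechanism combines Stellar multisignature with the consensus engine on TFChain):

```python
# Purely illustrative quorum rule for a restricted account; the real mechanism
# uses Stellar multisignature plus the consensus engine on TFChain.
from dataclasses import dataclass, field

@dataclass
class RestrictedAccount:
    balance: int
    signers: set            # public keys of consensus-engine signers
    threshold: int          # how many signatures are required
    approvals: set = field(default_factory=set)

    def approve(self, signer: str) -> None:
        if signer in self.signers:
            self.approvals.add(signer)

    def transfer(self, amount: int) -> bool:
        """Only execute when the quorum of approvals has been reached."""
        if len(self.approvals) >= self.threshold and amount <= self.balance:
            self.balance -= amount
            self.approvals.clear()
            return True
        return False

acc = RestrictedAccount(balance=1000, signers={"engine1", "engine2", "engine3"}, threshold=2)
acc.approve("engine1")
print(acc.transfer(100))   # False: only one signature collected
acc.approve("engine2")
print(acc.transfer(100))   # True: quorum of 2 reached
```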

> TODO: outdated info

!!!include:consensus3_toc
13
tosort/technology/consensus3_mechanism/consensus3_toc.md
Normal file
@@ -0,0 +1,13 @@

## Consensus Engine Information

- [Consensus Engine Homepage](consensus3)
- [Principles TFChain 3.0 Consensus](consensus3_principles)
- [Consensus Engine Farming 3.0](consensus3_engine_farming)
- [TFGrid 3.0 Wallets](tfgrid3_wallets)
- Architecture:
  - [Money Blockchains/Substrate architecture](money_blockchain_partity_link)
<!-- - [Consensus Engine Weight System](consensus3_weights) -->

> Implemented in TFGrid 3.0
BIN
tosort/technology/consensus3_mechanism/img/grid_header.jpg
Normal file
After Width: | Height: | Size: 39 KiB |
BIN
tosort/technology/consensus3_mechanism/img/limitedsupply_.png
Normal file
After Width: | Height: | Size: 64 KiB |
@@ -0,0 +1,53 @@

## Link between different Money Blockchains & TFChain

TF-Chain is the ThreeFold blockchain infrastructure, set up in the Substrate framework.

We are building a consensus layer which allows us to easily bridge between different money blockchains.

The main blockchain for TFT remains the Stellar network for now. A secure bridging mechanism exists, able to transfer TFT between the different blockchains.

Active bridges as from the TFGrid 3.0 release:

- Stellar <> Binance Smart Chain
- Stellar <> Parity Substrate

More bridges are under development.

```mermaid
graph TB
    subgraph Money Blockchain
        money_blockchain --- account1a
        money_blockchain --- account2a
        money_blockchain --- account3a
        account1a --> money_user_1
        account2a --> money_user_2
        account3a --> money_user_3
        click money_blockchain "/info/threefold/#money_blockchain"
    end
    subgraph ThreeFold Blockchain On Parity
        TFBlockchain --- account1b[account 1]
        TFBlockchain --- account2b[account 2]
        TFBlockchain --- account3b[account 3]
        account1b --- smart_contract_data_1
        account2b --- smart_contract_data_2
        account3b --- smart_contract_data_3
        click TFBlockchain "/info/threefold/#tfchain"
    end
    account1b ---- account1a[account 1]
    account2b ---- account2a[account 2]
    account3b ---- account3a[account 3]

    consensus_engine --> smart_contract_data_1[fa:fa-ban smart contract metadata]
    consensus_engine --> smart_contract_data_2[fa:fa-ban smart contract metadata]
    consensus_engine --> smart_contract_data_3[fa:fa-ban smart contract metadata]
    consensus_engine --> account1a
    consensus_engine --> account2a
    consensus_engine --> account3a
    click consensus_engine "/info/threefold/#consensus_engine"
```

The diagram above shows how our consensus engine can deal with Substrate and multiple Money Blockchains at the same time.

!!!include:consensus3_toc
52
tosort/technology/consensus3_mechanism/roadmap_tfchain3.md
Normal file
@@ -0,0 +1,52 @@

# Roadmap for our TFChain and ThreeFold DAO

## TFChain / DAO 3.0.2

For this phase, our TFChain and TFDAO have been implemented using Parity/Substrate.

Features:

- poc
- pou
- identity management
- consensus for upgrades of DAO and TFChain (code)
- capacity tracking (how much capacity is used)
- uptime achieved
- capacity utilization
- smart contract for IT
- storage of value = TFT
- request/approval for adding a validator

Basically, all basic DAO concepts are in place.

## TFChain / DAO 3.0.x

Version number TBD, planned Q1 2022.

NEW:

- proposals for TFChain/DAO/TFGrid changes (request for change) = we call them TFCRP (ThreeFold Change Request Proposal)
- voting on proposals = we call them TFCRV (ThreeFold Change Request Vote)

## TFChain / DAO 3.1.x

Version number TBD, planned Q1 2022.

This version adds more layers to our existing DAO and prepares for an even more scalable future.

NEW:

- Cosmos-based chain on L2
- Validator Nodes for TFGrid and TFChain
- Cosmos-based HUB = security for all TFChains

> For more info about our DAO strategy, see TFDAO.

!!!def alias:tfchain_roadmap,dao_roadmap,tfdao_roadmap
73
tosort/technology/consensus3_mechanism/tfgrid3_wallets.md
Normal file
@@ -0,0 +1,73 @@

# TFGrid 3.0 Wallets

ThreeFold has a mobile wallet which can be used with the TFChain backend (Substrate) as well as any other Money Blockchain it supports.

This provides a very secure digital currency infrastructure with lots of advantages:

- [X] ultra flexible smart contracts possible
- [X] super safe
- [X] compatible with multiple blockchains (money blockchains)
- [X] ultra scalable

```mermaid
graph TB
    subgraph Money Blockchain
        money_blockchain[Money Blockchain Explorers]
        money_blockchain --- money_blockchain_node_1 & money_blockchain_node_2
        money_blockchain_node_1
        money_blockchain_node_2
    end

    subgraph ThreeFold Wallets
        mobile_wallet[Mobile Wallet]
        desktop_wallet[Desktop Wallet]
        mobile_wallet & desktop_wallet --> money_blockchain
        mobile_wallet & desktop_wallet --> Explorers
        money_blockchain_wallet[Any Money Blockchain Wallet] --> money_blockchain
    end

    subgraph TFChain[TFGrid Blockchain on Substrate]
        Explorers[TFChain Explorers] --> TFGridDB --> BCNode
        Explorers --> BCNode
    end
```

Generic overview:

```mermaid
graph TB
    subgraph TFChain[TFGrid Chain]
        guardian1[TFChain Node 1]
        guardian2[TFChain Node 2]
        guardian3[TFChain Node 3...9]
    end

    User_wallet[User Wallet] --> money_blockchain_account
    User_wallet[User Wallet] --> money_blockchain_restricted_account

    subgraph Money Blockchain Ecosystem
        money_blockchain_account
        money_blockchain_restricted_account --- guardian1 & guardian2 & guardian3
    end

    subgraph consensus[Consensus Layer on Substrate]
        guardian1 --> ReputationEngine & PricingOracle
        guardian1 --> contract1[Smart Contract Vesting]
        guardian1 --> contract2[Smart Contract Minting/Farming]
    end
```

!!!include:consensus3_toc
52
tosort/technology/consensus3_mechanism/tfgrid_db_models.v
Normal file
@@ -0,0 +1,52 @@

// - vesting
//   - startdate: epoch
//   - currency: USD
//   - [[$month_nr, $minprice_unlock, $TFT_to_vest], ...]
//   - if 48 months, then the list will have 48 parts
//   - month 0 = first month
//   - e.g. [[0,0.11,10000],[1,0.12,10000],[2,0.13,10000],[3,0.14,10000]...]

// information stored at account level in TFGridDB
struct AccountMeta {
	// corresponds to a unique address on money_blockchain
	money_blockchain_address string
	vesting                  []Vesting
	unlocked_TFT             int
}

struct Vesting {
	startdate int
	// which currency is used to execute on the acceleration in the vesting:
	// if the price is above a certain level (currency + amount of that currency), then auto unlock
	currency CurrencyEnum
	months   []VestingMonth
}

struct VestingMonth {
	month_nr int
	// if 0 then will not unlock based on price
	unlock_price f32
	tft_amount   int
}

enum CurrencyEnum {
	usd
	eur
	egp
	gbp
	aed
}

// this is stored in the TFGridDB
fn (mut v AccountMeta) serialize() string {
	// todo: code which does the serialization, see above
	return ''
}

// write minting pool

// REMARKS
// if an unlock is triggered because of month or price, then that record in VestingMonth[] goes away
// and the TFT are added to unlocked_TFT
BIN
tosort/technology/img/layer0_.jpg
Normal file
After Width: | Height: | Size: 290 KiB |
BIN
tosort/technology/img/tech_architecture1.jpg
Normal file
After Width: | Height: | Size: 241 KiB |
BIN
tosort/technology/img/tech_header.jpg
Normal file
After Width: | Height: | Size: 94 KiB |
BIN
tosort/technology/img/technology_home_.jpg
Normal file
After Width: | Height: | Size: 128 KiB |
12
tosort/technology/layers/autonomous_layer_intro.md
Normal file
@@ -0,0 +1,12 @@
## Autonomous Layer

### Digital Twin

> TODO:

### 3Bot

3Bot is a virtual system administrator that manages the user's IT workloads under a private key. This ensures an immutable record of any workload as well as self-healing functionality to restore these workloads if and when needed. Also, all 3Bot IDs are registered in a modern type of phone book that uses blockchain technology. This phone book, also referred to as the ThreeFold Grid Blockchain, allows all 3Bots to find each other, connect and exchange information or resources in a fully end-to-end encrypted way. Here as well, there are "zero people" involved, as 3Bots operate autonomously in the network, and only under the user's commands.

3Bot is equipped with a cryptographic 2-factor authentication mechanism. You can log in to your 3Bot via the ThreeFold Connect app on your device, which contains your private key. The 3Bot is a very powerful tool that allows you to automate and manage thousands of virtual workloads on the ThreeFold_Grid.
54
tosort/technology/layers/capacity_layer_intro.md
Normal file
@@ -0,0 +1,54 @@
## Capacity Layer

### Zero-OS

ThreeFold has built its own operating system, called Zero-OS, which starts from a Linux kernel and removes all the unnecessary complexity found in contemporary operating systems.

Zero-OS supports a small number of primitives and performs low-level functions natively.

It delivers 3 primitive functions:

- storage capacity
- compute capacity
- network capacity

There is no shell, local or remote, attached to Zero-OS. It does not allow inbound network connections to the core. Also, given its shell-less nature, the people and organizations that run 3Nodes, called farmers, cannot issue any commands nor access its features. In that sense, Zero-OS enables a "zero people" (autonomous) Internet, meaning hackers cannot get in, while also eliminating human error from the paradigm.

### 3Node

The ThreeFold_Grid needs hardware/servers to function. Servers of all shapes and sizes can be added to the grid by anyone, anywhere in the world. The production of internet capacity on the ThreeFold Grid is called farming, and people who add these servers to the grid are called farmers. This is a fully decentralized process, and they get rewarded by means of TFT.

Farmers download the Zero-OS operating system and boot their servers themselves. Once booted, these servers become 3Nodes. The 3Nodes register themselves in a database called the TF_Explorer. Once registered in the TF_Explorer, the capacity of the 3Nodes becomes available on the TF Grid Explorer. Also, given the autonomous nature of the ThreeFold_Grid, there is no need for any intermediaries between the user and the 3Nodes.

This enables a complete peer-to-peer environment for people to reserve their internet capacity directly from the hardware.

### Smart Contract for IT

The purpose of the smart contract for IT is to create and enable autonomous IT. Autonomous, self-driving IT is possible when we adhere to two principles from the start:

1. Information technology architectures are configured and installed by bots (a 'smart contract agent'), not people.
2. Human beings cannot have access to these architectures and change things.

While sticking to these principles, it provides the basis to consider and describe everything in a contract-type format and to deploy any self-driving and self-healing application on the ThreeFold_Grid.

Once the smart contract for IT is created, it is registered in the blockchain database in a complete end-to-end process. It also leaves instructions for the 3Nodes in a digital notary system, so they can grab the necessary instructions and complete the smart contract.

Learn more about the smart contract for IT [here](smartcontract_it).

### TFChain

A blockchain running on the TFGrid stores the following information (TFGrid 3.0):

- registry for all digital twins (identity system, aka phonebook)
- registry for all farmers & 3Nodes
- registry for our reputation system
- info as required for the Smart Contract for IT

This is the heart of the operational system of the TFGrid.

### Peer-to-Peer Network

The peer-to-peer network allows any zmachine or user to connect with other zmachines or users on the TF Grid securely, and creates a private shortest-path peer-to-peer network.

### Web Gateway

The Web Gateway is a mechanism to connect the private (overlay) networks to the open Internet. By not providing an open and direct path into the private network, a lot of malicious phishing and hacking attempts are stopped at the Web Gateway level for container applications.
1
tosort/technology/layers/experience_layer_intro.md
Normal file
@@ -0,0 +1 @@
## Experience Layer
7
tosort/technology/layers/technology_layers.md
Normal file
@@ -0,0 +1,7 @@

!!!include:capacity_layer_intro

!!!include:autonomous_layer_intro

!!!include:experience_layer_intro
tosort/technology/primitives/compute/beyond_containers.md
Normal file
@@ -0,0 +1,17 @@
|
## Beyond Containers

Default features:

- compatible with Docker
- compatible with any Linux workload

We have the following unique advantages:

- no need to work with images; we work with our unique zos_fs.
- every container runs in a dedicated virtual machine, providing more security.
- the containers talk to each other over a private network: zos_net.
- the containers can use web_gw to allow users on the internet to connect to the applications running in their secure containers.
- core-x can be used to manage the workload.
7
tosort/technology/primitives/compute/compute_toc.md
Normal file
@@ -0,0 +1,7 @@
# Compute

<h2>Table of Contents</h2>

- [ZKube](./zkube.md)
- [ZMachine](./zmachine.md)
- [CoreX](./corex.md)
16
tosort/technology/primitives/compute/corex.md
Normal file
@@ -0,0 +1,16 @@

# CoreX

This tool allows you to manage your ZMachine remotely over the web.

ZMachine process manager:

- Provides a web interface and a REST API to control your processes.
- Allows you to watch the logs of your processes.
- Can be used as a web terminal (access to your terminal over HTTPS)!

!!!def

!!!include:zos_toc
BIN
tosort/technology/primitives/compute/img/container_native.jpg
Normal file
After Width: | Height: | Size: 209 KiB |
BIN
tosort/technology/primitives/compute/img/corex.jpg
Normal file
After Width: | Height: | Size: 177 KiB |
BIN
tosort/technology/primitives/compute/img/kubernetes_0_.jpg
Normal file
After Width: | Height: | Size: 349 KiB |
BIN
tosort/technology/primitives/compute/img/tfgrid_compute_.jpg
Normal file
After Width: | Height: | Size: 272 KiB |
BIN
tosort/technology/primitives/compute/img/zkube_architecture_.jpg
Normal file
After Width: | Height: | Size: 304 KiB |
BIN
tosort/technology/primitives/compute/img/zmachine_zos_.jpg
Normal file
After Width: | Height: | Size: 333 KiB |
25
tosort/technology/primitives/compute/tfgrid_compute.md
Normal file
@@ -0,0 +1,25 @@

## TFGrid Compute Layer

We are more than just container or VM technology; see [our Beyond Containers document](beyond_containers).

A 3Node is a Zero-OS enabled computer which is hosted with any of the TF_Farmers.

There are 4 storage mechanisms which can be used to store your data:

- ZOS_FS is our unique deduplicating filesystem; it replaces docker images.
- ZOS_Mount is a mounted disk location on SSD; this can be used as a faster storage location.
- QSFS is a unique storage system in which data can never be lost or corrupted. Please be reminded that this storage layer is only meant to be used for secondary storage applications.
- ZOS_Disk is a virtual disk technology, only for TFTech OEM partners.

There are 4 ways networks can be connected to a Z-Machine:

- Planetary_network: a planetary scalable network; we have clients for Windows, OSX, Android and iPhone.
- zos_net: a fast end2end encrypted network technology; keeps the traffic between your z_machines 100% private.
- zos_bridge: connection to a public IP address.
- web_gw: web gateway, a secure way to allow internet traffic to reach your secure Z-Machine.
27
tosort/technology/primitives/compute/zkube.md
Normal file
@@ -0,0 +1,27 @@

# ZKube

TFGrid is compatible with Kubernetes technology.

Each eVDC as shown above is a full-blown Kubernetes deployment.

### Unique for our Kubernetes implementation

- The Kubernetes networks run on top of our [ZNet](../network/znet.md) technology, which means all traffic between containers and Kubernetes hosts is end2end encrypted, independent of where your Kubernetes nodes are deployed.
- You can mount a QSFS underneath a Kubernetes node (VM), which means that you can deploy containers on top of QSFS to host unlimited amounts of storage in a super safe way.
- Your Kubernetes environment is 100% decentralized: you define where you want to deploy your Kubernetes nodes, and only you have access to the deployed workloads on the TFGrid.

### Features

* integration with znet (efficient, secure encrypted network between the zmachines)
* can be easily deployed at the edge
* single-tenant!

### ZMachine Benefits

* [ZOS Protect](../../zos/benefits/zos_protect.md): no hacking surface to the Zero-Nodes, integrated silicon root of trust

### Architecture
18
tosort/technology/primitives/compute/zmachine.md
Normal file
@@ -0,0 +1,18 @@

# ZMachine

### Features

* import from Docker (market standard for containers)
* can be easily deployed at the edge (edge cloud)
* single-tenant, fully decentralized!
* can deploy unlimited amounts of storage using our QSFS
* [ZOS Protect](../../zos/benefits/zos_protect.md): no hacking surface to the Zero-Nodes, integrated silicon root of trust
* [ZOS Filesystem](../storage/qsfs.md): dedupe, zero-install, hacker-proof
* [WebGateway](../network/webgw3.md): intelligent connection between web (internet) and container services
* integration with [ZNet](../network/znet.md) (efficient, secure encrypted network between the zmachines)

### Architecture

A ZMachine runs as a virtual machine on top of Zero-OS.
BIN
tosort/technology/primitives/network/img/overlay_net1.jpg
Normal file
After Width: | Height: | Size: 202 KiB |
BIN
tosort/technology/primitives/network/img/planet_net_.jpg
Normal file
After Width: | Height: | Size: 267 KiB |
BIN
tosort/technology/primitives/network/img/planetary_lan.jpg
Normal file
After Width: | Height: | Size: 77 KiB |
BIN
tosort/technology/primitives/network/img/planetary_net.jpg
Normal file
After Width: | Height: | Size: 76 KiB |
BIN
tosort/technology/primitives/network/img/redundant_net.jpg
Normal file
After Width: | Height: | Size: 188 KiB |
BIN
tosort/technology/primitives/network/img/webgateway.jpg
Normal file
After Width: | Height: | Size: 122 KiB |
BIN
tosort/technology/primitives/network/img/webgw_scaling.jpg
Normal file
After Width: | Height: | Size: 175 KiB |
BIN
tosort/technology/primitives/network/img/znet_redundancy.jpg
Normal file
After Width: | Height: | Size: 163 KiB |
BIN
tosort/technology/primitives/network/img/znet_znic.jpg
Normal file
After Width: | Height: | Size: 104 KiB |
BIN
tosort/technology/primitives/network/img/znet_znic1.jpg
Normal file
After Width: | Height: | Size: 104 KiB |
BIN
tosort/technology/primitives/network/img/zos_network_overlay.jpg
Normal file
After Width: | Height: | Size: 156 KiB |
7
tosort/technology/primitives/network/network_toc.md
Normal file
@@ -0,0 +1,7 @@
# Network

<h2>Table of Contents</h2>

- [ZNET](./znet.md)
- [ZNIC](./znic.md)
- [WebGateway](./webgw3.md)
5
tosort/technology/primitives/network/p2pagent.md
Normal file
@@ -0,0 +1,5 @@
# Peer2Peer Agent

> TODO

!!!include:zos_toc
54
tosort/technology/primitives/network/planetary_network.md
Normal file
@@ -0,0 +1,54 @@

# Planetary Network

The planetary network is an overlay network which lives on top of the existing internet or other peer2peer networks. In this network, everyone is connected to everyone, with end-to-end encryption between the users of an app and the app running behind the network wall.

Each user's network endpoint is strongly authenticated and uniquely identified, independent of the network carrier used. There is no need for centralized firewall or VPN solutions, as circle-based networking security is in place.

Benefits:

- It finds the shortest possible paths between peers
- There's full security through end-to-end encrypted messaging
- It allows for peer2peer links like meshed wireless
- It can survive broken internet links and re-route when needed
- It resolves the shortage of IPv4 addresses

Whereas current computer networks depend heavily on very centralized design and configuration, this networking concept breaks that mould by making use of a global spanning tree to form a scalable IPv6 encrypted mesh network. This is a peer2peer implementation of a networking protocol.

The following table illustrates the high-level differences between traditional networks, like the internet, and the planetary ThreeFold network:

| Characteristic | Traditional | Planetary Network |
| --------------------------------------------------------------- | ----------- | ----------------- |
| End-to-end encryption for all traffic across the network | No | Yes |
| Decentralized routing information shared using a DHT | No | Yes |
| Cryptographically-bound IPv6 addresses | No | Yes |
| Node is aware of its relative location to other nodes | No | Yes |
| IPv6 address remains with the device even if moved | No | Yes |
| Topology extends gracefully across different mediums, i.e. mesh | No | Yes |

## What are the problems solved here?

The internet as we know it today doesn't conform to a well-defined topology. This has largely happened over time: as the internet has grown, more and more networks have been "bolted together". The lack of a defined topology gives us some unavoidable problems:

- The routing tables that hold a "map" of the internet are huge and inefficient
- There isn't really any way for a computer to know where it is located on the internet relative to anything else
- It's difficult to examine where a packet will go on its journey from source to destination without actually sending it
- It's very difficult to install reliable networks into locations that change often or are non-static, i.e. wireless mesh networks

These problems have been partially mitigated (but not really solved) through centralization: rather than your computers at home holding a copy of the global routing table, your ISP does it for you. Your computers and network devices are configured just to "send it upstream" and to let your ISP decide where it goes from there, but this leaves you entirely at the mercy of your ISP, who can redirect your traffic anywhere they like and inspect, manipulate or intercept it.

In addition, wireless meshing requires you to know a lot about the network around you, which would not typically be the case when you have outsourced this knowledge to your ISP. Many existing wireless mesh routing schemes are not scalable or efficient, and do not bridge well with existing networks.

The planetary network is a continuation and implementation of the [Yggdrasil](https://yggdrasil-network.github.io/about.html) network initiative. This technology is in beta but has already been proven to work quite well.

!!!def alias:planet_net,planetary_net,planetary_network,pan

!!!include:zos_toc

> Click [here](manual:planetary_network_connector) to read more about Planetary Network Connector installation. Click [here](manual:yggdrasil_client) to read more about Planetary Network installation (advanced).
7
tosort/technology/primitives/network/tfgrid_network.md
Normal file
@@ -0,0 +1,7 @@
# TFGrid networking

- znet: private network between zmachines
- [Planetary Network](planetary_network): peer2peer end2end encrypted global network
- znic: interface to the planetary network
- [WebGateway](webgw): interface between the internet and znet
60
tosort/technology/primitives/network/webgw.md
Normal file
@@ -0,0 +1,60 @@

# WebGW 2.0

The Web Gateway is a mechanism to connect the private networks to the open Internet, in such a way that there is no direct connection between the internet and the secure workloads running in the ZMachines.

- Separation between where compute workloads are and where services are exposed.
- Better security
- Redundant
- Each app can be exposed on multiple web gateways at once.
- Support for many interfaces...
- Helps resolve the shortage of IPv4 addresses

If (parts of) this private overlay network need to be reachable from the Internet, the zmachines initiate a secure connection *to* the Web Gateway.

### Implementation

It is important to mention that this connection is not a standard network connection; it is a [network socket](https://en.wikipedia.org/wiki/Network_socket) initiated by the container or VM towards the web gateway. The container calls out to one or more web gateways and sets up a secure and private socket connection to each of them. The type of connection required is defined on the smart contract for IT layer and as such is very secure. No IP (TCP/UDP) traffic comes from the internet towards the containers, which provides more security. The sketch below illustrates this reversed connection setup.

Up to the Web Gateway, internet traffic follows the same route as for any other network endpoint: a DNS entry tells the consumer's client which IP address to send traffic to. This endpoint is the public interface of the Web Gateway. That interface accepts the HTTP(S) (or any other TCP) packets and forwards the packet payload over the secure socket connection (initiated by the container) to the container.

No open pipe (NAT plus port forwarding) from the public internet to specific containers in the private (overlay) network exists.
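The reversed connection setup can be illustrated with a short sketch: the workload inside the private network dials out to a gateway and then serves whatever the gateway forwards over that same socket. The address, port and one-line registration protocol below are invented for illustration; they are not the real web gateway protocol.

```python
# Sketch of the "reverse connection" idea: the workload dials OUT to the gateway,
# then answers requests that arrive over that same outbound socket.
# GATEWAY_HOST/PORT and the REGISTER line are placeholders, not the real protocol.
import socket

GATEWAY_HOST = "gateway.example.com"   # placeholder address of a web gateway
GATEWAY_PORT = 9000                    # placeholder port

def serve_over_reverse_socket() -> None:
    # Outbound connection initiated from inside the private (overlay) network
    with socket.create_connection((GATEWAY_HOST, GATEWAY_PORT)) as conn:
        conn.sendall(b"REGISTER myapp\n")       # announce which app this socket serves
        stream = conn.makefile("rwb")
        for request_line in stream:             # the gateway forwards each incoming request
            stream.write(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
            stream.flush()

if __name__ == "__main__":
    serve_over_reverse_socket()
```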
Web Gateways are created by so-called network farmers. Network farmers are people and companies that have access to good connectivity and a large number of publicly routable IP networks. They provide the facilities (hardware) for Web Gateways to run, and they terminate a lot of the public inbound and outbound traffic for the TF Grid. Examples of network farmers are ISPs, (regional, national and international) telcos, internet exchanges, etc.

### Security

By not providing an open and direct path into the private network, a lot of malicious phishing and hacking attempts are stopped at the Web Gateway. By design, any private network is meant to have multiple web gateways, and by design these Web Gateways exist on different infrastructure in different locations. Sniffing around and finding out what can be done with a Web Gateway might (and will) happen, but it will not compromise the containers in your private network.

### Redundant Network Connection

### Unlimited Scale

The network architecture is a pure scale-out network system; it can scale to unlimited size, as there is simply no bottleneck. Network "supply" is created by network farmers, and network "demand" comes from TF Grid users. Supply and demand scale independently: on the supply side there can be unlimited network farmers providing web gateways on their own 3Nodes, and unlimited compute farmers providing 3Nodes for compute and storage. The demand side is driven by developers creating software that runs on the grid and system integrators creating solutions for enterprises, and this demand side is growing exponentially for data processing and storage use cases.

### Network Wall (future)

See [Network Wall](network_wall).

## Roadmap

The Web Gateway described above is for 2.0.

For 3.0 we start with an HTTP(S) proxy over a Planetary Network connection. Not all features from WebGW 2.0 have been ported yet.

Further in the future, we envisage support for many other protocols: sql, redis, udp, ...

!!!def alias:web_gw,zos_web_gateway

!!!include:zos_toc
40
tosort/technology/primitives/network/webgw3.md
Normal file
@@ -0,0 +1,40 @@

# WebGW 3.0

The Web Gateway is a mechanism to connect the private networks to the open Internet, in such a way that there is no direct connection between the internet and the secure workloads running in the ZMachines.

- Separation between where compute workloads are and where services are exposed.
- Redundant
- Each app can be exposed on multiple web gateways at once.
- Support for many interfaces...
- Helps resolve the shortage of IPv4 addresses

### Implementation

Some 3Nodes support gateway functionality (configured by the farmers). A 3Node with a gateway configuration can accept gateway workloads and forward traffic to ZMachines that only have yggdrasil (planetary network) or IPv6 addresses.

A gateway workload consists of a name (prefix) that needs to be reserved on the blockchain first, plus the list of backend IPs. There are other flags that can be set to control automatic TLS (please check the terraform documentation for the exact details of a reservation).

Once the 3Node receives this workload, the network configures a proxy for this name and the yggdrasil IPs.

### Security

A ZMachine has to have an yggdrasil IP or any other IPv6 address (IPv4 addresses are also accepted), but this means that any person connected to the yggdrasil network can also reach the ZMachine without going through the proxy.

So it's up to the ZMachine owner/maintainer to make sure it is secured and only has the required ports open.

### Redundant Network Connection

### Unlimited Scale

The network architecture is a pure scale-out network system; it can scale to unlimited size, as there is simply no bottleneck. Network "supply" is created by network farmers, and network "demand" comes from TF Grid users. Supply and demand scale independently: on the supply side there can be unlimited network farmers providing web gateways on their own 3Nodes, and unlimited compute farmers providing 3Nodes for compute and storage. The demand side is driven by developers creating software that runs on the grid and system integrators creating solutions for enterprises, and this demand side is growing exponentially for data processing and storage use cases.
31
tosort/technology/primitives/network/znet.md
Normal file
@@ -0,0 +1,31 @@

# ZNET

ZNET is a decentralized networking platform allowing any compute and storage workload to be connected on a private (overlay) network and exposed to the existing internet network. The peer2peer network platform allows any workload to be connected over secure encrypted networks which look for the shortest path between the nodes.

### Secure mesh overlay network (peer2peer)

Z_NET is the foundation of any architecture running on the TF Grid. It can be seen as a virtual private datacenter, and the network allows all of the *N* containers to connect to all of the *(N-1)* other containers. Every network connection is a secure connection between your containers, creating a peer2peer network between containers.

No connection is made with the internet. The ZNet is a single-tenant network and by default not connected to the public internet. Everything stays private. For connecting to the public internet, a Web Gateway is included in the product to allow public access if and when required.

### Redundancy

As integrated with the [WebGW](./webgw3.md):

- Any app can get (securely) connected to the internet by any chosen IP address made available by ThreeFold network farmers through the WebGW.
- An app can be connected to multiple web gateways at once; the DNS round-robin principle provides load balancing and redundancy.
- An easy clustering mechanism exists, whereby web gateways and nodes can be lost while the public service stays up and running.
- Easy maintenance. When containers are moved or re-created, the same end-user connection can be reused, as that connection is terminated on the Web Gateway. The moved or newly created container recreates the socket to the Web Gateway and receives inbound traffic.

### Interfaces in Zero-OS
10
tosort/technology/primitives/network/znic.md
Normal file
@@ -0,0 +1,10 @@
# ZNIC

ZNIC is the network interface which is connected to a Z_Machine.

It can be implemented as an interface to:

- the planetary_network
- a public IP address on Zero-OS
25
tosort/technology/primitives/primitives_toc.md
Normal file
@@ -0,0 +1,25 @@
# Primitives

<h2>Table of Contents</h2>

- [Compute](./compute/compute_toc.md)
  - [ZKube](./compute/zkube.md)
  - [ZMachine](./compute/zmachine.md)
  - [CoreX](./compute/corex.md)
- [Storage](./storage/storage_toc.md)
  - [ZOS Filesystem](./storage/zos_fs.md)
  - [ZOS Mount](./storage/zmount.md)
  - [Quantum Safe File System](./storage/qsfs.md)
  - [Zero-DB](./storage/zdb.md)
  - [Zero-Disk](./storage/zdisk.md)
- [Network](./network/network_toc.md)
  - [ZNET](./network/znet.md)
  - [ZNIC](./network/znic.md)
  - [WebGateway](./network/webgw3.md)
- [Zero-OS Advantages](../zos/benefits/zos_advantages_toc.md)
  - [Zero-OS Installation](../zos/benefits/zero_install.md)
  - [Unbreakable Storage](../zos/benefits/unbreakable_storage.md)
  - [Zero Hacking Surface](../zos/benefits/zero_hacking_surface.md)
  - [Booting Process](../zos/benefits/zero_boot.md)
  - [Deterministic Deployment](../zos/benefits/deterministic_deployment.md)
  - [Zero-OS Protect](../zos/benefits/zos_protect.md)
BIN
tosort/technology/primitives/storage/img/zdb_arch.jpg
Normal file
After Width: | Height: | Size: 118 KiB |
BIN
tosort/technology/primitives/storage/img/zmount.jpg
Normal file
After Width: | Height: | Size: 66 KiB |
BIN
tosort/technology/primitives/storage/img/zos_zstor.jpg
Normal file
After Width: | Height: | Size: 99 KiB |
24
tosort/technology/primitives/storage/qsfs.md
Normal file
@@ -0,0 +1,24 @@
|
||||
# Quantum Safe Filesystem
|
||||
|
||||

|
||||
|
||||
The Quantum Safe Filesystem presents itself as a filesystem to the ZMachine.
|
||||
|
||||
### Benefits
|
||||
|
||||
- Safe
|
||||
- Hacker Proof
|
||||
- Ultra Reliable
|
||||
- Low Overhead
|
||||
- Ultra Scalable
|
||||
- Self Healing = recovers service automatically in the event of an outage, with no human intervention
|
||||
|
||||
|
||||
### Can be used as
|
||||
|
||||
- Backup and archive system
|
||||
- Blockchain storage backend (OEM only)
|
||||
|
||||
### Implementation
|
||||
|
||||
> QSFS uses the Quantum Safe Storage System (QSSS) internally.
|
9
tosort/technology/primitives/storage/storage_toc.md
Normal file
@@ -0,0 +1,9 @@
|
||||
# Storage
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [ZOS Filesystem](./zos_fs.md)
|
||||
- [ZOS Mount](./zmount.md)
|
||||
- [Quantum Safe File System](./qsfs.md)
|
||||
- [Zero-DB](./zdb.md)
|
||||
- [Zero-Disk](./zdisk.md)
|
7
tosort/technology/primitives/storage/zdb.md
Normal file
@@ -0,0 +1,7 @@
|
||||
# ZOS-DB (ZDB)
|
||||
|
||||

|
||||
|
||||
0-db is a fast and efficient, Redis-protocol-compatible key-value store that persists data in an always-append data file, with namespace support.
|
||||
|
||||
> ZDB is being used as backend storage for [Quantum Safe Filesystem](./qsfs.md).
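Because 0-db speaks the Redis protocol, a plain `redis-cli` can be used to try it out. A minimal sketch, assuming a 0-db started in user mode on port 9900 (the port, key and namespace name are arbitrary, and the namespace commands are 0-db's own extensions to the Redis command set):

```bash
# Quick interaction with a 0-db instance over the Redis protocol (assumed port 9900).
redis-cli -p 9900 PING                # check the instance is alive
redis-cli -p 9900 SET mykey "hello"   # store a value in the default namespace
redis-cli -p 9900 GET mykey           # read it back -> "hello"
redis-cli -p 9900 NSNEW backups       # create an additional namespace
redis-cli -p 9900 NSINFO backups      # inspect the namespace
```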
|
9
tosort/technology/primitives/storage/zdisk.md
Normal file
@@ -0,0 +1,9 @@
|
||||
# ZOS_Disk
|
||||
|
||||
The virtual disk primitive makes it possible to create and use virtual disks that can be attached to containers (and virtual machines).
|
||||
|
||||
The technology is designed to be redundant without any action required from the user.
|
||||
|
||||
## Roadmap
|
||||
|
||||
- The virtual disk technology is available for OEMs only; contact TF_Tech.
|
7
tosort/technology/primitives/storage/zmount.md
Normal file
@@ -0,0 +1,7 @@
|
||||
# ZOS_Mount
|
||||
|
||||
An SSD storage location that can be written to from inside a VMachine or VKube.
|
||||
|
||||
The SSD storage location is mounted on a chosen path inside your Z-Machine.
|
||||
|
||||

|
28
tosort/technology/primitives/storage/zos_fs.md
Normal file
@@ -0,0 +1,28 @@
|
||||
|
||||
# ZOS FileSystem (ZOS-FS)
|
||||
|
||||
A deduplicated filesystem that is more efficient than the images used in other virtual machine technologies.
|
||||
|
||||
## Uses FLIST Inside
|
||||
|
||||
In Zero-OS, `flist` is the format used to store zmachine images. This format is made to provide a complete, mountable remote filesystem while downloading only the file contents you actually need.
|
||||
|
||||
In practice, an flist is itself a small database containing metadata about files and directories, while the file payloads are stored on a TF Grid hub. A payload is only downloaded when it is needed, which dramatically reduces zmachine boot time, bandwidth and disk overhead.
|
||||
|
||||
### Why the ZFlist Concept
|
||||
|
||||
Have you ever been in the following situation: you need a couple of small files, but they are embedded in a large archive. How do you get to those files in an efficient way? What a disappointment when you see that the archive is 4 GB large and you only need a few files of 2 MB each. You have to download the full archive and store it somewhere just to extract the little you need. Time, effort and bandwidth wasted.
|
||||
|
||||
Or you want to start a Docker container and the base image you need is 2 GB. What do you have to do before you can use your container? Wait for the full 2 GB to download. This problem exists everywhere, but in Europe and the US bandwidth has become cheap enough that it is no longer perceived as a real problem, hence none of the leading (current) tech companies are looking for a solution.
|
||||
|
||||
We believe there should be a smarter way of dealing with this than simply throwing more bandwidth at the problem: what if you could download only the files you actually need instead of the full blob (archive, image, whatever...)?
|
||||
|
||||
ZFList splits metadata and data. The metadata is referential information about the content of the archive: everything you need to know about it, but without the payload. The payload is the content of the referenced files. The ZFList is exactly that: metadata with references that point to where the payload can be fetched. If you don't need it, you won't download it.
|
||||
|
||||
As soon as you have the flist mounted, you can see the full directory tree and walk around it. Files are only downloaded and presented the moment you try to access them. In other words, every time you want to read or modify a file, Zero FS downloads it so the data is available. You only download on the fly what you need, which dramatically reduces the bandwidth requirement.
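A conceptual sketch of the difference, with placeholder URLs and file names (this is not the real hub API, just an illustration of the metadata/payload split):

```bash
# Classic approach: pull a 4 GB archive just to reach two 2 MB files (placeholder URL).
curl -O https://example.com/big-archive.tar.gz          # ~4 GB over the wire
tar -xzf big-archive.tar.gz path/to/file1 path/to/file2

# Flist approach: pull only the small metadata database first (placeholder URL).
curl -O https://hub.example.com/big-archive.flist        # a few hundred KB of metadata
# Mounting the flist exposes the full directory tree; reading file1 and file2
# then downloads only their payloads, roughly 4 MB in total.
```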
|
||||
|
||||
|
||||
## Benefits
|
||||
|
||||
- Efficient use of bandwidth allows this service to perform well even when little bandwidth is available
|
BIN
tosort/technology/qsss/img/filesystem_abstract.jpg
Normal file
After Width: | Height: | Size: 32 KiB |
BIN
tosort/technology/qsss/img/qsss_intro_0_.jpg
Normal file
After Width: | Height: | Size: 315 KiB |
BIN
tosort/technology/qsss/interfaces_usecases/img/filesystem.jpg
Normal file
After Width: | Height: | Size: 15 KiB |
BIN
tosort/technology/qsss/interfaces_usecases/img/http.jpg
Normal file
After Width: | Height: | Size: 16 KiB |
BIN
tosort/technology/qsss/interfaces_usecases/img/hyperdrive.jpg
Normal file
After Width: | Height: | Size: 13 KiB |
BIN
tosort/technology/qsss/interfaces_usecases/img/ipfs.jpg
Normal file
After Width: | Height: | Size: 10 KiB |
After Width: | Height: | Size: 192 KiB |
BIN
tosort/technology/qsss/interfaces_usecases/img/nft_storage.jpg
Normal file
After Width: | Height: | Size: 154 KiB |
After Width: | Height: | Size: 138 KiB |
BIN
tosort/technology/qsss/interfaces_usecases/img/syncthing.jpg
Normal file
After Width: | Height: | Size: 23 KiB |
97
tosort/technology/qsss/interfaces_usecases/nft_storage.md
Normal file
@@ -0,0 +1,97 @@
|
||||
# Quantum Safe Storage System for NFT
|
||||
|
||||

|
||||
|
||||
The owner of the NFT can upload the data using one of our supported interfaces
|
||||
|
||||
- HTTP upload (everything possible on https://nft.storage/ is also possible on our system; see the sketch after this list)
|
||||
- filesystem
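As an illustration of the HTTP upload interface, the sketch below mimics an nft.storage-style `/upload` call; the endpoint and token are placeholders, and the exact ThreeFold endpoint may differ.

```bash
# Hypothetical HTTP upload, modelled on the https://nft.storage API
# (endpoint and token are placeholders).
curl -X POST https://nft.example.threefold.io/upload \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: image/png" \
  --data-binary @artwork.png
# The response contains the content identifier (CID) under which the
# NFT data can later be retrieved and verified.
```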
|
||||
|
||||
Anyone in the world can retrieve the NFT (if allowed), and the data is verified when doing so. The data is available everywhere in the world through multiple interfaces (IPFS, HTTP(S), ...). Caching happens at a global level. No special software or ThreeFold account is needed for this.
|
||||
|
||||
Underneath, the NFT system uses a highly reliable storage system that is sustainable for the planet (green) as well as ultra secure and private. The NFT owner also owns the data.
|
||||
|
||||
|
||||
## Benefits
|
||||
|
||||
#### Persistence = owned by the data user (as represented by digital twin)
|
||||
|
||||

|
||||
|
||||
The system is not based on a shared-all architecture.
|
||||
|
||||
Whoever stores the data has full control over:
|
||||
|
||||
- where data is stored (specific locations)
|
||||
- redundancy policy used
|
||||
- how long the data should be kept
|
||||
- CDN policy (where the data should be available and for how long)
|
||||
|
||||
|
||||
#### Reliability
|
||||
|
||||
- data cannot be corrupted
|
||||
- data cannot be lost
|
||||
- each time data is fetched, its hash (fingerprint) is checked; if an issue is detected, auto-recovery kicks in
|
||||
- all data is encrypted and compressed (unique per storage owner)
|
||||
- data owner chooses the level of redundancy
|
||||
|
||||
#### Lookup
|
||||
|
||||
- multi-URL & storage network support (see the interfaces section below)
|
||||
- IPFS, HyperDrive URL schema
|
||||
- unique DNS schema (with a long, globally unique key)
|
||||
|
||||
#### CDN support (with caching)
|
||||
|
||||
Each file stored (movie, image) is available in many places worldwide.
|
||||
|
||||
Each file gets a unique URL pointing to the data, which can be retrieved from any of these locations.
|
||||
|
||||
Caching happens on each endpoint.
|
||||
|
||||
#### Self Healing & Auto Correcting Storage Interface
|
||||
|
||||
Any corruption, e.g. bitrot, is automatically detected and corrected.
|
||||
|
||||
In case of a disk crash or storage node crash, the data is automatically expanded again to meet the chosen redundancy policy.
|
||||
|
||||
#### Storage Algorithm = Uses the Quantum Safe Storage System as its base
|
||||
|
||||
Not even a quantum computer can compromise data stored on our QSSS.
|
||||
|
||||
The QSSS is a highly innovative storage system that works at planetary scale and has many benefits compared to shared and/or replicated storage systems.
|
||||
|
||||
It uses forward-looking error-correcting codes internally.
|
||||
|
||||
#### Green
|
||||
|
||||
Storage uses up to 10x less energy compared to a classic replicated system.
|
||||
|
||||
#### Multi Interface
|
||||
|
||||
The stored data is available over multiple interfaces at once.
|
||||
|
||||
| interface | |
|
||||
| -------------------------- | ----------------------- |
|
||||
| IPFS |  |
|
||||
| HyperDrive / HyperCore |  |
|
||||
| http(s) on top of FreeFlow |  |
|
||||
| syncthing |  |
|
||||
| filesystem |  |
|
||||
|
||||
This allows ultimate flexibility from the end user's perspective.
|
||||
|
||||
The object (video, image) can easily be embedded in any website or other representation that supports HTTP.
|
||||
|
||||
|
||||
## More Info
|
||||
|
||||
* [Zero-OS overview](zos)
|
||||
* [Quantum Safe Storage System](qsss_home)
|
||||
* [Quantum Safe Storage Algorithm](qss_algorithm)
|
||||
* [Smart Contract For IT Layer](smartcontract_it)
|
||||
|
||||
|
||||
|
||||
!!!def alias:nft_storage,nft_storage_system
|
12
tosort/technology/qsss/interfaces_usecases/qss_use_cases.md
Normal file
@@ -0,0 +1,12 @@
|
||||
## Quantum Safe Storage use cases
|
||||
|
||||
### Backup
|
||||
|
||||
A perfect use case for QSS is backup. Several capabilities needed for a proper backup policy are a core part of QSS. Characteristics of QSS that make backups secure, scalable, efficient and sustainable are:
|
||||
- Physical storage devices are always-append. The lowest level of the storage stack, the ZDBs, are storage engines that by design behave as always-append devices.
|
||||
- Easy provisioning of these ZDBs makes them behave almost like old-fashioned tape devices on a rotation schedule. This capability makes it possible to use, store and phase out stored data in a way that is auditable and can be made very transparent.
|
||||
-
|
||||
|
||||
### Archiving
|
||||
|
||||
###
|
15
tosort/technology/qsss/interfaces_usecases/s3_interface.md
Normal file
@@ -0,0 +1,15 @@
|
||||
# S3 Service
|
||||
|
||||
If you would like an S3 interface, you can deploy one on top of our eVDC; it works very well together with our [quantumsafe_filesystem](quantumsafe_filesystem).
|
||||
|
||||
A good open-source solution delivering an S3 interface is [min.io](https://min.io/).
|
||||
|
||||
Thanks to our quantum safe storage layer, you could build fast, robust and reliable storage and archiving solutions.
|
||||
|
||||
A typical setup would look like:
|
||||
|
||||

|
||||
|
||||
> TODO: link to manual on cloud how to deploy minio, using helm (3.0 release)
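Until that manual is linked, a minimal sketch of what such a MinIO deployment with Helm could look like is shown below; the chart repository is MinIO's public one, while the release name, credentials and sizes are placeholders to adapt to your own cluster.

```bash
# Minimal MinIO deployment on a Kubernetes cluster running on the eVDC
# (release name, credentials and sizes are placeholders).
helm repo add minio https://charts.min.io/
helm repo update
helm install my-s3 minio/minio \
  --set rootUser=admin \
  --set rootPassword=change-me-please \
  --set mode=standalone \
  --set persistence.size=100Gi
```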
|
||||
|
||||
!!!def alias:s3_storage
|
297
tosort/technology/qsss/manual/qsfs_setup.md
Normal file
@@ -0,0 +1,297 @@
|
||||
# QSFS getting started on Ubuntu
|
||||
|
||||
## Get components
|
||||
|
||||
The following steps can be followed to set up a qsfs instance on a fresh Ubuntu installation.
|
||||
|
||||
- Install the fuse kernel module (`apt-get update && apt-get install fuse3`)
|
||||
- Install the individual components, by downloading the latest release from the
|
||||
respective release pages:
|
||||
- 0-db-fs: https://github.com/threefoldtech/0-db-fs/releases
|
||||
- 0-db: https://github.com/threefoldtech/0-db, if multiple binaries
|
||||
are available in the assets, choose the one ending in `static`
|
||||
- 0-stor: https://github.com/threefoldtech/0-stor_v2/releases, if
|
||||
multiple binaries are available in the assets, choose the one
|
||||
ending in `musl`
|
||||
- Make sure all binaries are executable (`chmod +x $binary`)
|
||||
|
||||
## Setup and run 0-stor
|
||||
|
||||
There are instructions below for a local 0-stor configuration. You can also deploy an eVDC and use the [provided 0-stor configuration](evdc_storage) for a simple cloud hosted solution.
|
||||
|
||||
We will run 7 0-db instances as backends for 0-stor: 4 are used for the
metadata and 3 for the actual data. The metadata always consists of 4
nodes; the number of data backends can be increased. You can choose to
either run 7 separate 0-db processes, or a single process with 7
namespaces. For the purpose of this setup, we will start 7 separate
processes, as follows:
|
||||
|
||||
> This assumes you have moved the downloaded 0-db binary to `/tmp/0-db`
|
||||
|
||||
```bash
|
||||
/tmp/0-db --background --mode user --port 9990 --data /tmp/zdb-meta/zdb0/data --index /tmp/zdb-meta/zdb0/index
|
||||
/tmp/0-db --background --mode user --port 9991 --data /tmp/zdb-meta/zdb1/data --index /tmp/zdb-meta/zdb1/index
|
||||
/tmp/0-db --background --mode user --port 9992 --data /tmp/zdb-meta/zdb2/data --index /tmp/zdb-meta/zdb2/index
|
||||
/tmp/0-db --background --mode user --port 9993 --data /tmp/zdb-meta/zdb3/data --index /tmp/zdb-meta/zdb3/index
|
||||
|
||||
/tmp/0-db --background --mode seq --port 9980 --data /tmp/zdb-data/zdb0/data --index /tmp/zdb-data/zdb0/index
|
||||
/tmp/0-db --background --mode seq --port 9981 --data /tmp/zdb-data/zdb1/data --index /tmp/zdb-data/zdb1/index
|
||||
/tmp/0-db --background --mode seq --port 9982 --data /tmp/zdb-data/zdb2/data --index /tmp/zdb-data/zdb2/index
|
||||
```
|
||||
|
||||
Now that the data storage is running, we can create the config file for
|
||||
0-stor. The (minimal) config for this example setup will look as follows:
|
||||
|
||||
```toml
|
||||
minimal_shards = 2
|
||||
expected_shards = 3
|
||||
redundant_groups = 0
|
||||
redundant_nodes = 0
|
||||
socket = "/tmp/zstor.sock"
|
||||
prometheus_port = 9100
|
||||
zdb_data_dir_path = "/tmp/zdbfs/data/zdbfs-data"
|
||||
max_zdb_data_dir_size = 25600
|
||||
|
||||
[encryption]
|
||||
algorithm = "AES"
|
||||
key = "000001200000000001000300000004000a000f00b00000000000000000000000"
|
||||
|
||||
[compression]
|
||||
algorithm = "snappy"
|
||||
|
||||
[meta]
|
||||
type = "zdb"
|
||||
|
||||
[meta.config]
|
||||
prefix = "someprefix"
|
||||
|
||||
[meta.config.encryption]
|
||||
algorithm = "AES"
|
||||
key = "0101010101010101010101010101010101010101010101010101010101010101"
|
||||
|
||||
[[meta.config.backends]]
|
||||
address = "[::1]:9990"
|
||||
|
||||
[[meta.config.backends]]
|
||||
address = "[::1]:9991"
|
||||
|
||||
[[meta.config.backends]]
|
||||
address = "[::1]:9992"
|
||||
|
||||
[[meta.config.backends]]
|
||||
address = "[::1]:9993"
|
||||
|
||||
[[groups]]
|
||||
[[groups.backends]]
|
||||
address = "[::1]:9980"
|
||||
|
||||
[[groups.backends]]
|
||||
address = "[::1]:9981"
|
||||
|
||||
[[groups.backends]]
|
||||
address = "[::1]:9982"
|
||||
```
|
||||
|
||||
> A full explanation of all options can be found in the 0-stor readme:
> https://github.com/threefoldtech/0-stor_v2/#config-file-explanation
|
||||
|
||||
This guide assumes the config file is saved as `/tmp/zstor_config.toml`.
|
||||
|
||||
Now `zstor` can be started. Assuming the downloaded binary was saved as
|
||||
`/tmp/zstor`:
|
||||
|
||||
`/tmp/zstor -c /tmp/zstor_config.toml monitor`. If you don't want the
|
||||
process to block your terminal, you can start it in the background:
|
||||
`nohup /tmp/zstor -c /tmp/zstor_config.toml monitor &`.
|
||||
|
||||
## Setup and run 0-db
|
||||
|
||||
First we will get the hook script. The hook script can be found in the
|
||||
[quantum_storage repo on github](https://github.com/threefoldtech/quantum-storage).
|
||||
A slightly modified version is found here:
|
||||
|
||||
```bash
|
||||
#!/usr/bin/env bash
|
||||
set -ex
|
||||
|
||||
action="$1"
|
||||
instance="$2"
|
||||
zstorconf="/tmp/zstor_config.toml"
|
||||
zstorbin="/tmp/zstor"
|
||||
|
||||
if [ "$action" == "ready" ]; then
|
||||
${zstorbin} -c ${zstorconf} test
|
||||
exit $?
|
||||
fi
|
||||
|
||||
if [ "$action" == "jump-index" ]; then
|
||||
namespace=$(basename $(dirname $3))
|
||||
if [ "${namespace}" == "zdbfs-temp" ]; then
|
||||
# skipping temporary namespace
|
||||
exit 0
|
||||
fi
|
||||
|
||||
tmpdir=$(mktemp -p /tmp -d zdb.hook.XXXXXXXX.tmp)
|
||||
dirbase=$(dirname $3)
|
||||
|
||||
# upload dirty index files
|
||||
for dirty in $5; do
|
||||
file=$(printf "i%d" $dirty)
|
||||
cp ${dirbase}/${file} ${tmpdir}/
|
||||
done
|
||||
|
||||
${zstorbin} -c ${zstorconf} store -s -d -f ${tmpdir} -k ${dirbase} &
|
||||
|
||||
exit 0
|
||||
fi
|
||||
|
||||
if [ "$action" == "jump-data" ]; then
|
||||
namespace=$(basename $(dirname $3))
|
||||
if [ "${namespace}" == "zdbfs-temp" ]; then
|
||||
# skipping temporary namespace
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# backup data file
|
||||
${zstorbin} -c ${zstorconf} store -s --file "$3"
|
||||
|
||||
exit 0
|
||||
fi
|
||||
|
||||
if [ "$action" == "missing-data" ]; then
|
||||
# restore missing data file
|
||||
${zstorbin} -c ${zstorconf} retrieve --file "$3"
|
||||
exit $?
|
||||
fi
|
||||
|
||||
# unknown action
|
||||
exit 1
|
||||
```
|
||||
|
||||
> This guide assumes the file is saved as `/tmp/zdbfs/zdb-hook.sh`. Make sure the
> file is executable, i.e. `chmod +x /tmp/zdbfs/zdb-hook.sh`.
|
||||
|
||||
The local 0-db which is used by 0-db-fs can be started as follows:
|
||||
|
||||
```bash
|
||||
/tmp/0-db \
|
||||
--index /tmp/zdbfs/index \
|
||||
--data /tmp/zdbfs/data \
|
||||
--datasize 67108864 \
|
||||
--mode seq \
|
||||
--hook /tmp/zdbfs/zdb-hook.sh \
|
||||
--background
|
||||
```
|
||||
|
||||
## Setup and run 0-db-fs
|
||||
|
||||
Finally, we will start 0-db-fs. This guide opts to mount the fuse
filesystem at `/mnt`. Again, assuming the 0-db-fs binary was saved as
`/tmp/0-db-fs`:
|
||||
|
||||
```bash
|
||||
/tmp/0-db-fs /mnt -o autons -o background
|
||||
```
|
||||
|
||||
You should now have the qsfs filesystem mounted at `/mnt`. As you write
data, it is saved in the local 0-db, and its data containers are
periodically encoded and uploaded to the backend data storage 0-db's.
The data files in the local 0-db will never occupy more than 25 GiB of
space (as configured in the 0-stor config file). If a data container is
removed due to space constraints and data inside it needs to be accessed
by the filesystem (e.g. a file is being read), the data container is
recovered from the backend storage 0-db's by 0-stor, and 0-db can
subsequently serve this data to 0-db-fs.
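A quick way to sanity-check the mount (using the paths from this guide):

```bash
# Basic smoke test of the mounted filesystem
df -h /mnt                                          # the qsfs mountpoint should be listed
dd if=/dev/urandom of=/mnt/testfile bs=1M count=64  # write some incompressible data
md5sum /mnt/testfile                                # read it back through the fuse layer
ls -lh /tmp/zdbfs/data/zdbfs-data                   # local 0-db data files grow as you write
```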
|
||||
|
||||
### 0-db-fs limitation
|
||||
|
||||
Any workload should be supported on this filesystem, with some exceptions:
|
||||
|
||||
- Opening a file in 'always append mode' will not have the expected behavior
- The fuse layer has no support for O_TMPFILE, a feature required by overlayfs,
  so overlayfs is not supported. Overlayfs is used by Docker, for example.
|
||||
|
||||
## Docker setup
|
||||
|
||||
It is possible to run zstor in a Docker container. First, create a data directory
on your host. Then, save the config file in the data directory as `zstor.toml`. Ensure
the storage 0-db's are running as described above. Then, run the Docker container
as follows:
|
||||
|
||||
```
|
||||
docker run -ti --privileged --rm --network host --name fstest -v /path/to/data:/data -v /mnt:/mnt:shared azmy/qsfs
|
||||
```
|
||||
|
||||
The filesystem is now available in `/mnt`.
|
||||
|
||||
## Autorepair
|
||||
|
||||
Autorepair automatically repairs objects stored in the backend when one or more shards
are no longer reachable. It does this by periodically checking whether all the backends
are still reachable. If it detects that one or more of the backends used by an encoded
object are not reachable, the healthy shards are downloaded, the object is restored
and encoded again (possibly with a new config, if it has changed since), and uploaded
again.
|
||||
|
||||
Autorepair does not validate the integrity of individual shards. This is protected
against by keeping multiple spare (redundant) shards for an object. Corrupt shards
are detected when the object is rebuilt, and removed before attempting the rebuild.
Autorepair also does not repair the metadata of objects.
|
||||
|
||||
## Monitoring, alerting and statistics
|
||||
|
||||
0-stor collects metrics about the system. It can be configured with a 0-db-fs mountpoint,
which triggers 0-stor to collect 0-db-fs statistics in addition to the 0-db statistics
that are always collected. If the `prometheus_port` config option is set, 0-stor
will serve metrics on this port for scraping by Prometheus. You can then set up
graphs and alerts in Grafana. Some examples include: disk space used vs available
per 0-db backend, total entries in 0-db backends, which backends are tracked, etc.
When 0-db-fs monitoring is enabled, statistics are also exported about the filesystem
itself, such as read/write speeds, syscalls, and internal metrics.
|
||||
|
||||
For a full overview of all available stats, you can set up a Prometheus scraper against
a running instance and use PromQL to explore everything that is available.
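To check that metrics are actually being exposed, you can query the endpoint directly; this assumes the `prometheus_port = 9100` value from the example config and the conventional `/metrics` path:

```bash
# Fetch the raw metrics served by zstor for Prometheus scraping
curl -s http://localhost:9100/metrics | head -n 20
```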
|
||||
|
||||
## Data safety
|
||||
|
||||
As explained in the autorepair section, data is periodically checked and rebuilt if
0-db backends become unreachable. This ensures that data, once stored, remains available
as long as the metadata is still present. When needed, the system can be expanded with more
0-db backends, and the encoding config can be changed (e.g. to rotate encryption keys).
|
||||
|
||||
## Performance
|
||||
|
||||
Qsfs is not a high-speed filesystem, nor is it a distributed filesystem. It is intended to
be used for archive purposes. For this reason, the qsfs stack focuses on data safety first.
Where needed, reliability is chosen over availability (i.e. we won't write data if we can't
guarantee that all the conditions of the required storage profile are met).
|
||||
|
||||
With that being said, there are currently 2 limiting factors in the setup:
|
||||
- speed of the disk on which the local 0-db is running
|
||||
- network
|
||||
|
||||
The first is the speed of the disk backing the local 0-db. This imposes a hard limit on
the throughput of the filesystem. Performance testing has shown that write speeds on
the filesystem reach roughly one third of the raw write performance of the disk, and
read speeds roughly half of its raw read performance. Note that in the case of _very_
fast disks (mostly NVMe SSDs), an old CPU with a low clock speed might become the
bottleneck, though this should generally not be a problem.
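To put rough numbers on this for your own setup, you can compare raw write speed on the disk backing the local 0-db with write speed through the qsfs mount; a simple, non-rigorous sketch using the paths from this guide:

```bash
# Rough throughput comparison (not a proper benchmark)
dd if=/dev/zero of=/tmp/zdbfs/ddtest bs=1M count=1024 oflag=direct  # raw disk write
dd if=/dev/zero of=/mnt/ddtest bs=1M count=1024                     # write through qsfs
rm /tmp/zdbfs/ddtest /mnt/ddtest
```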
|
||||
|
||||
The network is more of a soft cap. All 0-db data files are encoded and distributed
over the network, which means the upload speed of the node needs to be able to
handle this data throughput. In the case of random data (which is not compressible),
the required upload speed is the write speed of the 0-db-fs, increased by the
overhead generated by the storage policy. There is no feedback to 0-db-fs if the upload
of data is lagging behind. This means that if a sustained high-speed write load is
applied, the local 0-db might temporarily grow beyond the configured size limit
until the upload manages to catch up. If this happens for prolonged periods of time, it
is technically possible to run out of space on the disk. For this reason, you should
always keep some extra space available on the disk to account for temporary cache
excess.
|
||||
|
||||
When encoded data needs to be recovered from backend nodes (i.e. it is not in the cache),
the read speed is limited by the connection speed of the slowest backend, as all
shards are recovered before the data is rebuilt. This means that recovery of historical
data will generally be a slow process. Since we primarily focus on archive storage,
we do not consider this a priority.
|
9
tosort/technology/qsss/product/concept/img/create_png
Executable file
@@ -0,0 +1,9 @@
|
||||
#!/bin/bash
# Convert every mermaid (.mmd) diagram in this directory to a PNG using mermaid-cli.

for name in ./*.mmd
do
    # Replace the .mmd extension with .png for the output file
    output=$(basename "$name" mmd)png
    echo "$output"
    mmdc -i "$name" -o "$output" -w 4096 -H 2160 -b transparent
    echo "$name"
done
|
13
tosort/technology/qsss/product/concept/img/data_origin.mmd
Normal file
@@ -0,0 +1,13 @@
|
||||
graph TD
|
||||
subgraph Data Origin
|
||||
file[Large chunk of data = part_1part_2part_3part_4]
|
||||
parta[part_1]
|
||||
partb[part_2]
|
||||
partc[part_3]
|
||||
partd[part_4]
|
||||
file -.- |split part_1|parta
|
||||
file -.- |split part_2|partb
|
||||
file -.- |split part 3|partc
|
||||
file -.- |split part 4|partd
|
||||
parta --> partb --> partc --> partd
|
||||
end
|
@@ -0,0 +1,20 @@
|
||||
graph TD
|
||||
subgraph Data Substitution
|
||||
parta[part_1]
|
||||
partb[part_2]
|
||||
partc[part_3]
|
||||
partd[part_4]
|
||||
parta -.-> vara[ A = part_1]
|
||||
partb -.-> varb[ B = part_2]
|
||||
partc -.-> varc[ C = part_3]
|
||||
partd -.-> vard[ D = part_4]
|
||||
end
|
||||
subgraph Create equations with the data parts
|
||||
eq1[A + B + C + D = 6]
|
||||
eq2[A + B + C - D = 3]
|
||||
eq3[A + B - C - D = 10]
|
||||
eq4[ A - B - C - D = -4]
|
||||
eq5[ A - B + C + D = 0]
|
||||
eq6[ A - B - C + D = 5]
|
||||
vara & varb & varc & vard --> eq1 & eq2 & eq3 & eq4 & eq5 & eq6
|
||||
end
|
@@ -0,0 +1,44 @@
|
||||
graph TD
|
||||
subgraph Data Origin
|
||||
file[Large chunk of data = part_1part_2part_3part_4]
|
||||
parta[part_1]
|
||||
partb[part_2]
|
||||
partc[part_3]
|
||||
partd[part_4]
|
||||
file -.- |split part_1|parta
|
||||
file -.- |split part_2|partb
|
||||
file -.- |split part 3|partc
|
||||
file -.- |split part 4|partd
|
||||
parta --> partb --> partc --> partd
|
||||
parta -.-> vara[ A = part_1]
|
||||
partb -.-> varb[ B = part_2]
|
||||
partc -.-> varc[ C = part_3]
|
||||
partd -.-> vard[ D = part_4]
|
||||
end
|
||||
subgraph Create equations with the data parts
|
||||
eq1[A + B + C + D = 6]
|
||||
eq2[A + B + C - D = 3]
|
||||
eq3[A + B - C - D = 10]
|
||||
eq4[ A - B - C - D = -4]
|
||||
eq5[ A - B + C + D = 0]
|
||||
eq6[ A - B - C + D = 5]
|
||||
vara & varb & varc & vard --> eq1 & eq2 & eq3 & eq4 & eq5 & eq6
|
||||
end
|
||||
subgraph Disk 1
|
||||
eq1 --> |store the unique equation, not the parts|zdb1[A + B + C + D = 6]
|
||||
end
|
||||
subgraph Disk 2
|
||||
eq2 --> |store the unique equation, not the parts|zdb2[A + B + C - D = 3]
|
||||
end
|
||||
subgraph Disk 3
|
||||
eq3 --> |store the unique equation, not the parts|zdb3[A + B - C - D = 10]
|
||||
end
|
||||
subgraph Disk 4
|
||||
eq4 --> |store the unique equation, not the parts|zdb4[A - B - C - D = -4]
|
||||
end
|
||||
subgraph Disk 5
|
||||
eq5 --> |store the unique equation, not the parts|zdb5[ A - B + C + D = 0]
|
||||
end
|
||||
subgraph Disk 6
|
||||
eq6 --> |store the unique equation, not the parts|zdb6[A - B - C + D = 5]
|
||||
end
|
@@ -0,0 +1,34 @@
|
||||
graph TD
|
||||
subgraph Local laptop, computer or server
|
||||
user[End User]
|
||||
protocol[Storage protocol]
|
||||
qsfs[Filesystem on local OS]
|
||||
0store[Quantum Safe storage engine]
|
||||
end
|
||||
subgraph Grid storage - metadata
|
||||
etcd1[ETCD-1]
|
||||
etcd2[ETCD-2]
|
||||
etcd3[ETCD-3]
|
||||
end
|
||||
subgraph Grid storage - zero proof data
|
||||
zdb1[ZDB-1]
|
||||
zdb2[ZDB-2]
|
||||
zdb3[ZDB-3]
|
||||
zdb4[ZDB-4]
|
||||
zdb5[ZDB-5]
|
||||
zdb6[ZDB-6]
|
||||
zdb7[ZDB-7]
|
||||
user -.- protocol
|
||||
protocol -.- qsfs
|
||||
qsfs --- 0store
|
||||
0store --- etcd1
|
||||
0store --- etcd2
|
||||
0store --- etcd3
|
||||
0store <-.-> zdb1[ZDB-1]
|
||||
0store <-.-> zdb2[ZDB-2]
|
||||
0store <-.-> zdb3[ZDB-3]
|
||||
0store <-.-> zdb4[ZDB-4]
|
||||
0store <-.-> zdb5[ZDB-5]
|
||||
0store <-.-> zdb6[ZDB-...]
|
||||
0store <-.-> zdb7[ZDB-N]
|
||||
end
|
9
tosort/technology/qsss/product/file_system/img/create_png
Executable file
@@ -0,0 +1,9 @@
|
||||
#!/bin/bash
# Convert every mermaid (.mmd) diagram in this directory to a PNG using mermaid-cli.

for name in ./*.mmd
do
    # Replace the .mmd extension with .png for the output file
    output=$(basename "$name" mmd)png
    echo "$output"
    mmdc -i "$name" -o "$output" -w 4096 -H 2160 -b transparent
    echo "$name"
done
|
BIN
tosort/technology/qsss/product/file_system/img/qsss_intro_.jpg
Normal file
After Width: | Height: | Size: 285 KiB |
After Width: | Height: | Size: 238 KiB |
39
tosort/technology/qsss/product/file_system/qss_filesystem.md
Normal file
@@ -0,0 +1,39 @@
|
||||
<!--  -->
|
||||
|
||||

|
||||
|
||||
# Quantum Safe Filesystem
|
||||
|
||||
A redundant filesystem that can store PBs (millions of gigabytes) of information.
|
||||
|
||||
Unique features:
|
||||
|
||||
- Unlimited scalability (many petabytes)
|
||||
- Quantum Safe:
|
||||
- On the TFGrid, no farmer knows what the data is about
|
||||
- Even a quantum computer cannot decrypt
|
||||
- Data can't be lost
|
||||
- Protection for [datarot](datarot), data will autorepair
|
||||
- Data is kept for ever
|
||||
- Data is dispersed over multiple sites
|
||||
- Sites can go down, data not lost
|
||||
- Up to 10x more efficient than storing on classic storage cloud systems
|
||||
- Can be mounted as filesystem on any OS or any deployment system (OSX, Linux, Windows, Docker, Kubernetes, TFGrid, ...)
|
||||
- Compatible with almost all data workloads (not high-performance, data-driven workloads like a database)
|
||||
- Self-healing: when a node or disk is lost, the storage system can restore the original redundancy level
|
||||
- Helps with compliance with regulations like GDPR (the hosting facility has no view on what is stored; information is encrypted and incomplete)
|
||||
- Hybrid: can be installed onsite, public, private, ...
|
||||
- Read-write caching on encoding node (the front end)
|
||||
|
||||
|
||||
## Architecture
|
||||
|
||||
By using our filesystem inside a virtual machine or Kubernetes, the TFGrid user can deploy any storage application on top, e.g. MinIO for S3 storage or OwnCloud as an online fileserver.
|
||||
|
||||

|
||||
|
||||
Any storage workload can be deployed on top of the zstor.
|
||||
|
||||
!!!def alias:quantumsafe_filesystem,planetary_fs,planet_fs,quantumsafe_file_system,zstor,qsfs
|
||||
|
||||
!!!include:qsss_toc
|