- If the user decided to use the [scheduler](terraform_scheduler.md) to find a node, then the node returned from the scheduler is used, as in the example above
## Using Grid Explorer
- If not, the user can still specify a node directly, using the grid explorer to find one that matches their requirements
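For example, if the grid explorer points at node `14` (an illustrative ID, not taken from the original example), that ID can be plugged in directly in place of the scheduler output. A minimal sketch:

```terraform
resource "grid_network" "net1" {
  nodes         = [14] # node ID picked manually via the grid explorer
  ip_range      = "10.1.0.0/16"
  name          = "network"
  description   = "some network"
  add_wg_access = true
}
```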
## Describing the overlay network for the project
```terraform
resource "grid_network" "net1" {
nodes = [grid_scheduler.sched.nodes["node1"]]
ip_range = "10.1.0.0/16"
name = "network"
description = "some network"
add_wg_access = true
}
```
We tell terraform to create a network on one node (the node ID returned from the scheduler) using the IP range `10.1.0.0/16`, and to add WireGuard access for this network
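The bullets below describe the VM deployment resource that accompanies this network. Its code block is not part of this section, so the following is a minimal sketch reconstructed from the parameters discussed below and the provider's `grid_deployment` schema; the flist URL, resource sizes, and SSH key are illustrative:

```terraform
resource "grid_deployment" "d1" {
  node         = grid_scheduler.sched.nodes["node1"]
  network_name = grid_network.net1.name
  # lookup with a default of "" so planning succeeds before the node's range is known
  ip_range = lookup(grid_network.net1.nodes_ip_range, grid_scheduler.sched.nodes["node1"], "")

  vms {
    name       = "vm1"
    flist      = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
    cpu        = 2
    memory     = 1024 # MB
    publicip   = true
    entrypoint = "/sbin/zinit init"
    env_vars = {
      SSH_KEY = "ssh-rsa AAAA... user@host" # your public key here
    }
  }

  vms {
    name       = "vm2"
    flist      = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
    cpu        = 1
    memory     = 1024
    entrypoint = "/sbin/zinit init"
    env_vars = {
      SSH_KEY = "ssh-rsa AAAA... user@host"
    }
  }
}
```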
- `node = grid_scheduler.sched.nodes["node1"]` means this deployment will happen on the node returned from the scheduler. Alternatively, the user can specify the node directly, e.g. `node = 2`; in that case the choice of the node is entirely up to the user, who needs to do the capacity planning themselves. Check the [Node Finder](dashboard@@node_finder) to know which nodes fit your deployment criteria.
- `network_name`: the network to deploy the project on; here we use the `name` of the network `net1`.
- `ip_range`: here we [lookup](https://www.terraform.io/docs/language/functions/lookup.html) the IP range the network assigned to the chosen node, initially loading it with `""`.
> Advanced note: direct map access fails during planning if the key doesn't exist yet, which happens in cases like adding a node to the network together with a new deployment on that node. It is therefore replaced with a `lookup` that supplies a default empty value to pass the planning validation; the value is validated inside the plugin anyway.
- `cpu` and `memory` define the number of virtual CPUs and the amount of memory (in MB) allocated to the VM.
- `publicip` defines whether the VM requires a public IP or not.
- `entrypoint` defines the entrypoint, which in most cases is `/sbin/zinit init`; for flists based on full VMs it can be specific to each flist.
- `env_vars` defines the environment variables; in this example we set `SSH_KEY` to authorize SSH access to the machine.
Here we say this deployment will run on the chosen node, use the overlay network defined above (`grid_network.net1.name`), and use the IP range allocated to that specific node
The file describes only the desired state: a deployment of two VMs, their specifications in terms of CPU and memory, and some environment variables (e.g. an SSH key to SSH into the machines)
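After `terraform apply`, the WireGuard config and the VM addresses can be exposed as outputs in order to actually reach the machines. A sketch, assuming the deployment resource is named `grid_deployment.d1` as in the sketch above, and relying on the provider's computed attributes (`access_wg_config` on the network, `ip` and `computed_ip` on each VM):

```terraform
output "wg_config" {
  value = grid_network.net1.access_wg_config # WireGuard config, since add_wg_access = true
}

output "vm1_ip" {
  value = grid_deployment.d1.vms[0].ip # overlay-network IP of the first VM
}

output "vm1_public_ip" {
  value = grid_deployment.d1.vms[0].computed_ip # public IP, since publicip = true
}
```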
## Reference
A complete list of VM workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/deployment.md#nested-schema-for-vms).