This guide applies only to V2 assemblers deployed as a virtual machine in Microsoft Azure using Terraform, an infrastructure as code (IaC) tool. The V1 assembler is no longer supported as of June 30, 2024.
Each assembler you created must be deployed via a virtual machine, and then you can add your technology as a security device in Workbench to complete the full integration. For more information about the Expel Assembler or how it works, see the About the Expel Assembler guide.
Prerequisites
- You must have completed all of the steps in Add a New Assembler for each assembler you wish to deploy.
- You must extract the Fedora CoreOS image file you downloaded.
- You will need to upload the .vhd file, not the compressed .xz file, to use Terraform.
- You must verify your network security group (for your firewall configuration) is in a resource group, and that you know which group it is.
- You will need to place the storage account, blob storage container, image, and virtual machine from this guide in that same resource group.
- You must have Terraform installed.
Quick Links
Terraform lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share. These instructions assume you have working knowledge of Terraform; if you need additional help or context, refer to the Terraform Documentation.
Setup includes the following steps (select any step for detailed instructions):
- Download the Ignition File
- Install and Log In via the Azure CLI
- Set Up the Terraform Config File
- Prepare the Resource Definitions File
- Create an Azure Storage Container
- Upload the Ignition File and CoreOS File
- Create a Managed Image in Azure
- Create a Shared Access Signature (SAS) for the Ignition File Blob
- Create the Custom Data via Terraform’s Template_File Source
- Configure Your VM's Network Infrastructure
- Configure and Spin Up the Virtual Machine
- Verify a “Connected” Status in Workbench
To see a full code example, go to the Reference section.
Step 1: Download the Ignition File
The ignition file enables the virtual machine to read a configuration file, and to provision the Fedora CoreOS system based on the contents of that file. You will use this file when you configure the virtual machine in Azure.
- Log in to Workbench.
- In the side menu, navigate to Organization Settings > Assemblers.
- Find the assembler you created, leave the file format as JSON, and select Download the CoreOS Ignition File. This action will download a JSON file that you will need in the next section. You may choose a different file format if you like, but the JSON format is recommended for this type of assembler.
- Move your ignition file to a remote, secure location such as Azure Blob Storage. The contents of the ignition file will be stored in plaintext (unencrypted) wherever your Terraform state files are located. Some guidelines:
- Do not store your ignition file in a git repository. The file contains sensitive information and git is not a suitable place for this type of data.
- Be sure to lock down access to the storage location. Only people who need access to the ignition file (and to the Terraform state files if using Terraform Remote State) should have access to the storage location.
- Repeat this process for any additional assemblers. Important: you must keep track of the files, and which came from which assembler, because each assembler has its own unique ignition file.
Step 2: Install and Log In via the Azure CLI
You must authenticate Terraform with your Azure account. The first step is to get the CLI and log in to your Azure account.
- If you have not yet installed the Azure CLI tool, follow Microsoft's Azure CLI installation instructions for your platform.
- Log in to Azure via the CLI. This command opens a browser window where you log in (look for a successful login message in your browser, then return to your terminal):
az login --scope https://graph.microsoft.com//.default
- Terminal will retrieve and return your subscription and tenant information. Follow the instructions to select your subscription and tenant.
Step 3: Set Up the Terraform Config File
You need to set up the config file so that Terraform uses the Azure provider to configure your infrastructure. See the Azure documentation if you need additional help with this step.
- Create a Terraform config file, if you do not have one already. An example config file name could be "terraform.tf".
- Add the following configuration to your Terraform config file:
```hcl
# We strongly recommend using the required_providers block to set the
# Azure provider source and the version being used
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }
}

provider "azurerm" {
  features {}
}
```
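Because the ignition file's contents end up in plaintext in your Terraform state (see Step 1), you may also want to keep the state itself in a locked-down remote backend rather than on a local disk. A minimal sketch of an azurerm backend, assuming a pre-existing, access-restricted storage account (all names below are placeholders):

```hcl
# Optional: store Terraform state in a locked-down Azure storage container.
# All names below are placeholders for your own resources.
terraform {
  backend "azurerm" {
    resource_group_name  = "YOUR_RESOURCE_GROUP_NAME"
    storage_account_name = "YOUR_STATE_STORAGE_ACCOUNT"
    container_name       = "tfstate"
    key                  = "assembler.terraform.tfstate"
  }
}
```

If you use a remote backend, restrict access to it the same way you restrict access to the ignition file's storage location.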
Step 4: Prepare the Resource Definitions File
You need a resource definitions file to hold your Terraform resources, SAS Token, custom data, network, internal subnet, and internal network interface.
- Create a resource definitions file, if you do not have one already. An example file name could be "assembler.tf".
- Do one of the following:
- If you do not have an existing storage account and blob storage container, and need to create them, continue to Create an Azure Storage Container.
- If you already have an existing storage account and blob storage container you want to use, and it's within the resource group that contains your network security group, skip to Upload the Ignition File and CoreOS File.
Step 5: Create an Azure Storage Container
If you already have an existing storage account and blob storage container you want to use within your resource group, you should skip this step and go to Step 6.
- To create these two resources quickly, add the following block to your resource definitions file.
- Make sure to use the resource group that contains your network security group and firewall configuration for YOUR_RESOURCE_GROUP_NAME.
- Make sure your location matches the location of your virtual machine. In this example, we have set the value to "East US".
- You may use any name you like for your storage account (YOUR_STORAGE_ACCOUNT_NAME) and blob storage container (YOUR_STORAGE_CONTAINER_NAME).
- If you are deploying more than one assembler, you may use the same resources as long as all assemblers are in the same location. If the location of each assembler differs, you must create a new storage account and blob storage container for each location.
```hcl
resource "azurerm_resource_group" "assembler-resource-group" {
  name     = "YOUR_RESOURCE_GROUP_NAME"
  location = "East US"
}

resource "azurerm_storage_account" "assembler-storage-account" {
  name                     = "YOUR_STORAGE_ACCOUNT_NAME"
  resource_group_name      = azurerm_resource_group.assembler-resource-group.name
  location                 = azurerm_resource_group.assembler-resource-group.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "assembler-storage-container" {
  name                  = "YOUR_STORAGE_CONTAINER_NAME"
  storage_account_name  = azurerm_storage_account.assembler-storage-account.name
  container_access_type = "private"
}
```
- Deploy your new Terraform resources.
- Run terraform init to initialize the working directory.
- Run terraform plan and review the changes.
- If the plan looks right, run terraform apply and confirm the actions.
Step 6: Upload the Ignition File and CoreOS File
Your CoreOS image file and ignition file (stored in its remote location) can now be uploaded to your new Azure storage container. Remember that you must have already extracted your CoreOS image file, so that you have a .vhd file ready for upload (not a compressed .xz file).
Note
If you prefer to use the Azure CLI to complete this step, refer to the Microsoft article Upload a VHD to Azure or copy a managed disk to another region - Azure CLI for instructions.
To upload the .vhd CoreOS image file through the Azure Portal:
- Log in to your Azure portal via a browser.
- Select Storage Accounts.
- Select the storage account you created in the previous section, or that you already have and want to use.
- Select Upload.
- In the Upload blob window:
- Drag-and-drop or browse for the .vhd CoreOS image file.
- Find and select the container you created in the previous section, or that you already have and want to use.
- Leave all Advanced settings as is.
- Select Upload.
- Drag and drop or browse for the ignition file.
- Verify you are still in the same container.
- Leave all Advanced settings as is.
- Select Upload.
- Repeat this process for any additional assemblers that are using that container.
- Close the window.
Step 7: Create a Managed Image in Azure
You must create a managed image that can be used for the Linux virtual machine.
- Still in your Azure portal, search for or select Images.
- Select Create.
- On the Create an image screen:
- Subscription - select the subscription you want to use.
- Resource group - select the resource group that contains your network security group; this is the same resource group that you chose to hold your storage container.
- Name - enter a name for the image, such as "assemblercoreosimage".
- Region - select the region for your resource group.
- Zone resiliency - leave unchecked.
- OS type - select Linux.
- VM generation - select Gen 1.
- Storage blob - use the Browse link to select the storage account you created in Step 5, then select the storage container, then select the .vhd CoreOS image file, then choose Select.
- Account type - select Standard SSD.
- Host caching - select Read/write.
- Key management - select Platform-managed key.
- Data disk - do not add a data disk.
- Select Review + Create.
- Review your configuration if desired, and select Create.
Before you move to the next section, you must obtain the URI for the image file and save it for later use (you will need the URI in Step 11). To do so:
- From your list of images, select the image you just uploaded.
- While viewing the image's Overview page, copy the full URL out of your browser and into a text editor.
- Select the portion of the URL from “/subscriptions” all the way to the name of the image, and leave out “/overview” at the end. Save this portion as your image file's URI.
Example full URL (image name is "mycoreosimage"): https://portal.azure.com/lab.onmicrosoft.com/resource/subscriptions/1abc2d34-e5f6-789g-h0i1-j2k345l6789m/resourceGroups/coresosresource/providers/Microsoft.Compute/images/mycoreosimage/overview
Example resulting URI:
/subscriptions/1abc2d34-e5f6-789g-h0i1-j2k345l6789m/resourceGroups/coresosresource/providers/Microsoft.Compute/images/mycoreosimage
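As an alternative to copying the URI out of the portal by hand, Terraform can look the managed image up by name. A sketch using the azurerm_image data source, assuming the example image name from above and your resource group name as a placeholder:

```hcl
# Look up the managed image created in Step 7 by name instead of
# pasting its URI. Names below are example/placeholder values.
data "azurerm_image" "assembler_coreos" {
  name                = "assemblercoreosimage"
  resource_group_name = "YOUR_RESOURCE_GROUP_NAME"
}

# In the virtual machine resource (Step 11), you could then use:
#   source_image_id = data.azurerm_image.assembler_coreos.id
```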
Step 8: Create a Shared Access Signature (SAS) for the Ignition File Blob
To give CoreOS the ability to remotely access the ignition file from your Azure storage container, you must provide a link to the file. If you are deploying more than one assembler, this step will need to be repeated for each ignition file.
Note
This step creates a Shared Access Signature (SAS Token) with an expiration of one hour. Make sure to complete the rest of this onboarding guide before it expires.
Add the following block to your resource definitions file to create your SAS.
- If you are using a pre-existing storage account that is not hosted by Terraform, be sure to update the primary_connection_string attribute accordingly.
- If you need help finding the string in your Azure portal, follow the steps in this Microsoft Help Forum topic.
```hcl
data "azurerm_storage_account_sas" "assembler-container" {
  connection_string = azurerm_storage_account.assembler-storage-account.primary_connection_string
  https_only        = true
  start             = timestamp()
  expiry            = timeadd(timestamp(), "1h")
  signed_version    = "2019-10-10"

  resource_types {
    service   = false
    container = true
    object    = true
  }

  services {
    blob  = true
    queue = false
    table = false
    file  = true
  }

  permissions {
    read    = true
    write   = false
    delete  = false
    list    = false
    add     = false
    create  = false
    update  = false
    process = false
  }
}
```
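If one hour is too tight for your rollout, the expiry window is straightforward to widen. A sketch that parameterizes it with a variable (the variable name and its 4h default are arbitrary examples, not values from this guide):

```hcl
# Hypothetical variable controlling how long the SAS token stays valid.
variable "sas_expiry" {
  description = "How long the ignition file SAS token remains valid"
  type        = string
  default     = "4h"
}

# In the azurerm_storage_account_sas data block, the expiry line would become:
#   expiry = timeadd(timestamp(), var.sas_expiry)
```

Keep the window as short as practical, since anyone holding the SAS URL can read the ignition file until it expires.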
Step 9: Create the Custom Data via Terraform’s Template_File Source
This step creates the file used in a later step’s custom_data resource attribute. If you are deploying more than one assembler, this step will need to be repeated for each assembler's ignition file.
Add the following block to your resource definitions file to create your custom data.
- Be sure to replace NAME-OF-BLOB with the file name of the storage blob for your ignition file.
```hcl
data "template_file" "custom_data" {
  template = jsonencode({
    ignition = {
      config = {
        replace = {
          source = "${azurerm_storage_account.assembler-storage-account.primary_blob_endpoint}${azurerm_storage_container.assembler-storage-container.name}/NAME-OF-BLOB${data.azurerm_storage_account_sas.assembler-container.sas}"
        }
      },
      version = "3.4.0"
    }
  })
}
```
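The template_file data source comes from the archived hashicorp/template provider, and the block above performs no template interpolation, so a plain locals block produces the same rendered JSON without the extra provider. A sketch of that alternative:

```hcl
# Equivalent to the template_file block above, without the template provider.
locals {
  custom_data = jsonencode({
    ignition = {
      config = {
        replace = {
          source = "${azurerm_storage_account.assembler-storage-account.primary_blob_endpoint}${azurerm_storage_container.assembler-storage-container.name}/NAME-OF-BLOB${data.azurerm_storage_account_sas.assembler-container.sas}"
        }
      },
      version = "3.4.0"
    }
  })
}

# In the virtual machine resource (Step 11), you would then use:
#   custom_data = base64encode(local.custom_data)
```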
Step 10: Configure Your VM's Network Infrastructure
If you already have an existing network infrastructure to place your Assembler's virtual machine within, you may skip this section and go to Step 11.
This Terraform block creates a network, an internal subnet, and an internal network interface for use by the Assembler's virtual machine. Add it to your resource definitions file and be sure to specify your own:
- Network name (YOUR_NETWORK_NAME)
- Internal subnet name (YOUR_SUBNET_NAME)
- Internal network interface name (YOUR_NETWORK_INTERFACE_NAME)
```hcl
resource "azurerm_virtual_network" "assembler-network" {
  name                = "YOUR_NETWORK_NAME"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.assembler-resource-group.location
  resource_group_name = azurerm_resource_group.assembler-resource-group.name
}

resource "azurerm_subnet" "assembler-subnet" {
  name                 = "YOUR_SUBNET_NAME"
  resource_group_name  = azurerm_resource_group.assembler-resource-group.name
  virtual_network_name = azurerm_virtual_network.assembler-network.name
  address_prefixes     = ["10.0.2.0/24"]
}

resource "azurerm_network_interface" "assembler-network-interface" {
  name                = "YOUR_NETWORK_INTERFACE_NAME"
  location            = azurerm_resource_group.assembler-resource-group.location
  resource_group_name = azurerm_resource_group.assembler-resource-group.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.assembler-subnet.id
    private_ip_address_allocation = "Dynamic"
  }
}
```
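Note that the block above does not attach your network security group to the new subnet. If you want the firewall rules from the NSG in your prerequisites applied to the assembler's subnet, a sketch of the association (YOUR_NSG_NAME is a placeholder for your existing group):

```hcl
# Attach an existing network security group to the assembler subnet.
# YOUR_NSG_NAME is a placeholder for the NSG from the prerequisites.
data "azurerm_network_security_group" "assembler-nsg" {
  name                = "YOUR_NSG_NAME"
  resource_group_name = azurerm_resource_group.assembler-resource-group.name
}

resource "azurerm_subnet_network_security_group_association" "assembler-nsg-assoc" {
  subnet_id                 = azurerm_subnet.assembler-subnet.id
  network_security_group_id = data.azurerm_network_security_group.assembler-nsg.id
}
```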
Step 11: Configure and Spin Up the Virtual Machine
Before you begin, make sure you have your URI available (Step 7). If you are deploying more than one assembler, repeat this process for each assembler and be sure to use a unique name for each one.
A few notes about this block:
- The size is “Standard_D2_v3”, which meets the minimum CPU and RAM requirements for an assembler (2 virtual CPUs, 8 GB RAM).
- The disk_size_gb is 20, which indicates the 20GB minimum disk size required for an assembler.
- The admin_username, username, and admin_ssh_key have been filled in for you. The Azure Terraform Provider requires values in these fields, but they will be overwritten by the Assembler bootup sequence.
To configure your virtual machine:
- Add the following block to your resource definitions file. This will instruct the azurerm_linux_virtual_machine resource to configure your virtual machine with the minimum requirements. Make sure to:
- Use a unique name for each assembler. If you are just deploying one assembler, you can leave the name value as "assembler".
- Add your image URI (from Step 7) as the source_image_id.
```hcl
resource "azurerm_linux_virtual_machine" "assembler" {
  name                = "assembler"
  resource_group_name = azurerm_resource_group.assembler-resource-group.name
  location            = azurerm_resource_group.assembler-resource-group.location
  size                = "Standard_D2_v3"

  # admin_username is required. However, the assembler installation
  # replaces all users on the machine.
  admin_username = "customer"

  network_interface_ids = [
    azurerm_network_interface.assembler-network-interface.id,
  ]

  custom_data = base64encode(data.template_file.custom_data.rendered)

  # admin_ssh_key is required. However, the assembler installation
  # replaces all ssh keys on the machine.
  admin_ssh_key {
    username   = "customer"
    public_key = "AAA123BBB456CCC789"
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
    disk_size_gb         = 20
  }

  source_image_id = "YOUR_IMAGE_URI"
}
```
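If you want to see where the assembler landed without opening the portal, you can surface the VM's private IP as a Terraform output. A small optional sketch (the output name is arbitrary):

```hcl
# Shows the assembler VM's private IP address after terraform apply.
output "assembler_private_ip" {
  value = azurerm_linux_virtual_machine.assembler.private_ip_address
}
```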
- Deploy your Terraform resources.
- Run terraform init to initialize the working directory.
- Run terraform plan and review the changes.
- If the plan looks right, run terraform apply and confirm the actions.
Step 12: Verify a “Connected” Status in Workbench
It can take 10 to 15 minutes for the assembler’s status to update in Workbench.
- Log in to Workbench.
- In the side menu, navigate to Organization Settings > Assemblers (or, refresh the page if you never logged out).
- Find your newly created assembler(s) and verify that the status has changed from “Not Yet Connected” to “Connected.”
- If the status has not updated yet, make sure you have waited at least 15 minutes, then refresh the page and check again.
Troubleshooting
If your assembler is still not showing as “Connected” after 15 minutes:
- Make sure your chosen connection has the proper firewall configurations to allow our outbound ports.
- Make sure your config file includes the correct region (Step 3).
- Make sure your ignition file is at the path specified, and that you are referencing the correct ignition file for your assembler (Step 1).
- Make sure your chosen machine’s size meets the required minimums (2 virtual CPUs, 8 GB RAM, and 20 GB disk space).
- Make sure the login credentials you obtained for the assembler (Step 1) are for a user who has admin permissions in Workbench.
- If Boot Diagnostics is available on the machine, go to the assembler’s virtual machine page in the Azure Portal, select Help, and then select Boot Diagnostics to see a screenshot of the serial console. Alternatively, connect to the serial console by selecting Serial Console and entering your assembler’s customer credentials (this requires Workbench admin access). Either view may help diagnose the problem.
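Boot Diagnostics is only available if it was enabled on the VM. If you manage the VM with the Terraform block from Step 11, an empty boot_diagnostics block enables the variant backed by a platform-managed storage account; a sketch of the fragment to add:

```hcl
# Inside the azurerm_linux_virtual_machine "assembler" resource from Step 11:
# an empty boot_diagnostics block enables boot diagnostics backed by a
# platform-managed storage account, making the serial console screenshot
# available in the portal.
boot_diagnostics {}
```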
If all firewall, config file, and resource definitions settings are correct and you are still unable to connect the assembler, contact support for help.
Reference
Full Code Example
terraform.tf (config file)
```hcl
# We strongly recommend using the required_providers block to set the
# Azure provider source and the version being used
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }
}

provider "azurerm" {
  features {}
}
```
assembler.tf (resource definitions)
```hcl
resource "azurerm_resource_group" "assembler-resource-group" {
  name     = "YOUR_RESOURCE_GROUP_NAME"
  location = "East US"
}

resource "azurerm_storage_account" "assembler-storage-account" {
  name                     = "YOUR_STORAGE_ACCOUNT_NAME"
  resource_group_name      = azurerm_resource_group.assembler-resource-group.name
  location                 = azurerm_resource_group.assembler-resource-group.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "assembler-storage-container" {
  name                  = "YOUR_STORAGE_CONTAINER_NAME"
  storage_account_name  = azurerm_storage_account.assembler-storage-account.name
  container_access_type = "private"
}

data "azurerm_storage_account_sas" "assembler-container" {
  connection_string = azurerm_storage_account.assembler-storage-account.primary_connection_string
  https_only        = true
  start             = timestamp()
  expiry            = timeadd(timestamp(), "1h")
  signed_version    = "2019-10-10"

  resource_types {
    service   = false
    container = true
    object    = true
  }

  services {
    blob  = true
    queue = false
    table = false
    file  = true
  }

  permissions {
    read    = true
    write   = false
    delete  = false
    list    = false
    add     = false
    create  = false
    update  = false
    process = false
  }
}

data "template_file" "custom_data" {
  template = jsonencode({
    ignition = {
      config = {
        replace = {
          source = "${azurerm_storage_account.assembler-storage-account.primary_blob_endpoint}${azurerm_storage_container.assembler-storage-container.name}/NAME-OF-BLOB${data.azurerm_storage_account_sas.assembler-container.sas}"
        }
      },
      version = "3.4.0"
    }
  })
}

resource "azurerm_virtual_network" "assembler-network" {
  name                = "YOUR_NETWORK_NAME"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.assembler-resource-group.location
  resource_group_name = azurerm_resource_group.assembler-resource-group.name
}

resource "azurerm_subnet" "assembler-subnet" {
  name                 = "YOUR_SUBNET_NAME"
  resource_group_name  = azurerm_resource_group.assembler-resource-group.name
  virtual_network_name = azurerm_virtual_network.assembler-network.name
  address_prefixes     = ["10.0.2.0/24"]
}

resource "azurerm_network_interface" "assembler-network-interface" {
  name                = "YOUR_NETWORK_INTERFACE_NAME"
  location            = azurerm_resource_group.assembler-resource-group.location
  resource_group_name = azurerm_resource_group.assembler-resource-group.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.assembler-subnet.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_linux_virtual_machine" "assembler" {
  name                = "assembler"
  resource_group_name = azurerm_resource_group.assembler-resource-group.name
  location            = azurerm_resource_group.assembler-resource-group.location
  size                = "Standard_D2_v3"

  # admin_username is required. However, the assembler installation
  # replaces all users on the machine.
  admin_username = "customer"

  network_interface_ids = [
    azurerm_network_interface.assembler-network-interface.id,
  ]

  custom_data = base64encode(data.template_file.custom_data.rendered)

  # admin_ssh_key is required. However, the assembler installation
  # replaces all ssh keys on the machine.
  admin_ssh_key {
    username   = "customer"
    public_key = "AAA123BBB456CCC789"
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
    disk_size_gb         = 20
  }

  source_image_id = "YOUR_IMAGE_URI"
}
```