Deploying Workloads on Oracle Cloud Infrastructure (OCI) via Terraform - PART 2
Deploying compartments, compute & network resources on OCI via Terraform

Welcome to Part 2 of my series, Working with Terraform on Oracle Cloud Infrastructure (OCI).
If you need a refresher, visit Part 1.
In Part 1, we did the following:
- Installed Terraform
- Created .tf scripts
- Executed the workflow: Initialize > Plan > Apply
- Authenticated our OCI tenant with the Terraform provider scripts
In Part 2, let's look at how we can provision compartments, then provision compute & network resources within them.

In Part 1, we created the following .tf file which allowed Terraform to authenticate with our OCI tenant.

In Part 2, we will first need to create a compartment to store the resources that we'll be creating later. To do this, we will need 2 .tf files in our working folder.

Creating the Working Directory
Let's begin by creating the working directory to store our files in. Once the files are defined, we'll initialize Terraform & put it to work from this directory.
Create a directory called 'tf-compartment' under your root directory.
$ mkdir tf-compartment
Copy the provider.tf file from the tf-provider directory to this directory (tf-compartment). Once you have that in place, let's create the compartment.tf file & define the parameters. Add the following code in your file.
resource "oci_identity_compartment" "tf-compartment" {
  compartment_id = "<tenancy-ocid>"
  description    = "Compartment for Terraform resources."
  name           = "<your-compartment-name>"
}
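If you'd also like Terraform to print the OCID of the new compartment after it's created, you could optionally add an output block; a small sketch (the output name is my own choice):

```hcl
# Optional: print the OCID of the compartment once it has been created
output "compartment-OCID" {
  value = oci_identity_compartment.tf-compartment.id
}
```

This saves a trip to the console whenever you need the compartment OCID for later scripts.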
Defining Policies on OCI
Let's add a policy on our OCI tenant.
- Navigate to Identity & Security > Policies
- Choose a Compartment if you have one or select the root compartment for now
- Click Create Policy & click Show manual editor
- Paste the following policy & click Create
allow group <the-group-your-username-belongs> to manage compartments in tenancy
Creating the Compartment
Initialize
First, let's initialize Terraform within our working directory, tf-compartment
$ terraform init
You'll get the following output
Plan
Once we've initialized Terraform, let's plan our deployment.
$ terraform plan
You'll get an output like this:
Apply
Once we're good with our plan, let's proceed with creating the compartment by issuing the following command
$ terraform apply
You'll get an output like this:
Now, let's verify the creation in our OCI tenant
There you go. Our compartment has been successfully created on OCI
Provisioning a Compute Instance in an Existing VCN
Now, let's attempt to provision a compute instance in our compartment, vickterraform2. This compute instance will be provisioned into an existing Virtual Cloud Network, vickvcn1, in the OCI tenant
Generating SSH Keys
I'll be generating SSH keys which will be used to connect to the compute instance which I'll be creating later
$ ssh-keygen -t rsa -N "" -b 2048 -C <your-ssh-key-name> -f <your-ssh-key-name>
This will generate a public & private key pair for you. You would get something like this
Gather Important Information
Before we begin creating the scripts, let's gather the important information that we'll be using in our scripts
- Compartment Name
- Compartment ID
- Subnet ID
- Source ID
- Shape
- Path of your SSH public key
- Path of your SSH private key
Getting the Source ID
The Source ID refers to the OCID of the source image for your compute instance, & this depends on which image (Ubuntu, CentOS, Oracle Linux, Windows, etc.) you would like to provision
To get the Source ID:
- Identify your region. Mine is us-ashburn-1
- Refer here: Image Release Notes
- For this deployment, I'll be using an Ubuntu 20.04 image & this is my Source ID to the image according to my region: ocid1.image.oc1.iad.aaaaaaaaos5ofuq26nipxcybznimsjnhyw3jf7mrq2r3kgpsf6zqbtqei25q
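As an aside, rather than copying the OCID from the release notes by hand, the OCI provider's oci_core_images data source can look it up for you. A minimal sketch, assuming the Canonical Ubuntu 20.04 image used in this deployment (the data source name "ubuntu" is my own choice):

```hcl
# Look up Canonical Ubuntu 20.04 images in the tenancy's region,
# newest first, so images[0] is the most recently published one
data "oci_core_images" "ubuntu" {
  compartment_id           = "<tenancy-ocid>"
  operating_system         = "Canonical Ubuntu"
  operating_system_version = "20.04"
  sort_by                  = "TIMECREATED"
  sort_order               = "DESC"
}

output "ubuntu-image-ocid" {
  value = data.oci_core_images.ubuntu.images[0].id
}
```

This keeps the script working even as Oracle publishes newer builds of the image.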
Getting the Shape
The shape refers to the size of the compute instance which defines the allocation of CPUs & memory
To get the shape:
- Refer here: VM Standard Shapes
- I'll be using a VM.Standard2.1 shape
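As a side note, OCI also offers flexible shapes where you choose the OCPU & memory allocation yourself. If you'd rather use one, the instance resource would carry a shape_config sub-block; a sketch assuming the VM.Standard.E4.Flex shape (the OCPU & memory values here are illustrative):

```hcl
shape = "VM.Standard.E4.Flex"
shape_config {
  ocpus         = 1
  memory_in_gbs = 16
}
```

For this walkthrough, though, I'll stick with the fixed VM.Standard2.1 shape.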
Add Policies to OCI
Let's add the following policy in Identity & Security > Identity > Policies under your compartment
allow group <the-group-your-username-belongs> to manage all-resources in compartment <your-compartment-name>
Creating the Scripts
Let's set up a new working directory for this called 'tf-compute'
$ mkdir tf-compute
Copy your 'provider.tf' file into this directory
$ cp ../tf-provider/provider.tf .
Let's also copy the 'availability-domains.tf' file which we created in the initial setup into this directory
$ cp ../tf-provider/availability-domains.tf .
The 'availability-domains.tf' has the following code
data "oci_identity_availability_domains" "ads" {
  compartment_id = "<tenancy-ocid>"
}
Let's also create an 'outputs.tf' file in the 'tf-compute' directory
output "name-of-first-availability-domain" {
  value = data.oci_identity_availability_domains.ads.availability_domains[0].name
}
Now, we have the following files in our tf-compute directory
- provider.tf
- availability-domains.tf
- outputs.tf
Let's initialize > plan > apply our Terraform scripts
$ terraform init
$ terraform plan
$ terraform apply
Once you've run terraform apply, you'll see the name of your first availability domain in the output
Declare the Compute Resource
Now that we're ready to provision the compute instance, let's create a 'compute.tf' file & add the following code into the file
resource "oci_core_instance" "ubuntu_instance" {
  # Required
  availability_domain = data.oci_identity_availability_domains.ads.availability_domains[0].name
  compartment_id      = "<compartment-ocid>"
  shape               = "VM.Standard2.1"
  source_details {
    source_id   = "<source-ocid>"
    source_type = "image"
  }

  # Optional
  display_name = "<your-ubuntu-instance-name>"
  create_vnic_details {
    assign_public_ip = true
    subnet_id        = "<subnet-ocid>"
  }
  metadata = {
    ssh_authorized_keys = file("<ssh-public-key-path>")
  }
  preserve_boot_volume = false
}
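To save yourself a trip to the console to find the instance's public IP for SSH, you could optionally add an output for it to 'outputs.tf'; a small sketch (the output name is my own choice):

```hcl
# Print the instance's public IP once it has been provisioned
output "ubuntu-instance-public-ip" {
  value = oci_core_instance.ubuntu_instance.public_ip
}
```

After terraform apply, the IP will appear alongside the availability domain output.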
This is my completed file
Now, let's run our scripts
$ terraform init
$ terraform plan
$ terraform apply
And, we have successfully created our Ubuntu server via Terraform. Also, let's verify this on the OCI console
Let's also try connecting to this server
$ ssh -i <ssh-private-key-path> ubuntu@<your-public-ip-address>
And there you go. We've successfully accessed the Ubuntu server
Now that I've tested the deployment & it's working, I will proceed to delete this compute instance
$ terraform destroy
Simultaneously on the console, the instance is getting terminated
Once the instance is completely terminated, you will get the following output
Creating a New Virtual Cloud Network (VCN)
In this section, I will demonstrate the creation of a new Virtual Cloud Network (VCN) & its associated components such as subnets, gateways, security lists & more

Gather Important Information
Before we begin, let's gather the important information that we'll be using in our scripts
- Compartment Name
- Compartment ID
- Region
Create a Working Directory
Let's create a new working directory for this demo & let's name it 'tf-vcn'
$ mkdir tf-vcn
Copy the 'provider.tf' file from the 'tf-provider' directory into this directory
$ cp ../tf-provider/provider.tf .
Declaring the VCN Configuration
Now, let's create a file called 'vcn-module.tf' & insert the following code into it
# Source from https://registry.terraform.io/modules/oracle-terraform-modules/vcn/oci/
module "vcn" {
  source  = "oracle-terraform-modules/vcn/oci"
  version = "3.1.0"
  # insert the 5 required variables here

  # Required Inputs
  compartment_id               = "<compartment-ocid>"
  region                       = "<region-identifier>"
  internet_gateway_route_rules = null
  local_peering_gateways       = null
  nat_gateway_route_rules      = null

  # Optional Inputs
  vcn_name      = "vcn-module"
  vcn_dns_label = "vcnmodule"
  vcn_cidrs     = ["10.0.0.0/16"]

  create_internet_gateway = true
  create_nat_gateway      = true
  create_service_gateway  = true
}
Here's a sample of my code
Let's also create an 'outputs.tf' file & insert the following code to get the information of the VCN once we run the scripts
# Outputs for the vcn module
output "vcn_id" {
  description = "OCID of the VCN that is created"
  value       = module.vcn.vcn_id
}

output "id-for-route-table-that-includes-the-internet-gateway" {
  description = "OCID of the internet-route table. This route table has an internet gateway to be used for public subnets"
  value       = module.vcn.ig_route_id
}

output "nat-gateway-id" {
  description = "OCID for NAT gateway"
  value       = module.vcn.nat_gateway_id
}

output "id-for-route-table-that-includes-the-nat-gateway" {
  description = "OCID of the nat-route table. This route table has a NAT gateway to be used for private subnets. It also has a service gateway."
  value       = module.vcn.nat_route_id
}
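Since the compartment OCID gets repeated across several of these files, one optional refinement is to declare it once in a 'variables.tf' file & reference it everywhere as var.compartment_id; a sketch (the variable name is my own choice):

```hcl
# variables.tf: declare the compartment OCID once
variable "compartment_id" {
  description = "OCID of the compartment to deploy resources into"
  type        = string
  default     = "<compartment-ocid>"
}
```

Blocks elsewhere could then use compartment_id = var.compartment_id instead of hardcoding the OCID, which makes the scripts easier to reuse across compartments.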
Creating the VCN
Now that we have the required scripts configured, let's create the VCN
$ terraform init
$ terraform plan
$ terraform apply
And there you go. We've successfully created the VCN & the gateways
Let's also verify this on the OCI console
Notice that we don't have any subnets in our VCN yet. We'll be creating this later
Customizing the VCN
Before we jump into creating subnets, I would like to define security lists for them
Creating a Security List for the Private Subnet
Let's create a file called 'private-security-list.tf' & add the following code into it
# Source from https://registry.terraform.io/providers/hashicorp/oci/latest/docs/resources/core_security_list
resource "oci_core_security_list" "private-security-list" {
  # Required
  compartment_id = "<compartment-ocid>"
  vcn_id         = module.vcn.vcn_id

  # Optional
  display_name = "security-list-for-private-subnet"
}
Let's also add an egress rule to this security list by adding the following block inside the resource block
egress_security_rules {
  stateless        = false
  destination      = "0.0.0.0/0"
  destination_type = "CIDR_BLOCK"
  protocol         = "all"
}
Here's a sample of my code
Also, let's add the following code into the 'outputs.tf' file
# Outputs for private security list
output "private-security-list-name" {
  value = oci_core_security_list.private-security-list.display_name
}

output "private-security-list-OCID" {
  value = oci_core_security_list.private-security-list.id
}
Once we have created our scripts, let's run the creation of the security list for our private subnet
$ terraform init
$ terraform plan
$ terraform apply
And we've successfully created our security list along with its egress rule
Now that that's successful, let's also create ingress rules for our security list. In 'private-security-list.tf', let's add the following blocks inside the resource block
ingress_security_rules {
  stateless   = false
  source      = "10.0.0.0/16"
  source_type = "CIDR_BLOCK"
  # Get protocol numbers from https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml TCP is 6
  protocol = "6"
  tcp_options {
    min = 22
    max = 22
  }
}
ingress_security_rules {
  stateless   = false
  source      = "0.0.0.0/0"
  source_type = "CIDR_BLOCK"
  # Get protocol numbers from https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml ICMP is 1
  protocol = "1"
  # For ICMP type and code see: https://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml
  icmp_options {
    type = 3
    code = 4
  }
}
ingress_security_rules {
  stateless   = false
  source      = "10.0.0.0/16"
  source_type = "CIDR_BLOCK"
  # Get protocol numbers from https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml ICMP is 1
  protocol = "1"
  # For ICMP type and code see: https://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml
  icmp_options {
    type = 3
  }
}
Once again, let's run our scripts
$ terraform init
$ terraform plan
$ terraform apply
Now that it has successfully run, we are able to see the ingress & egress rules created for the private subnet's security list
Creating a Security List for the Public Subnet
Now, let's replicate the above operations for our public subnet. Let's copy the file 'private-security-list.tf' to a new file called 'public-security-list.tf' & let's modify the parameters
$ cp private-security-list.tf public-security-list.tf
Let's change the resource name private-security-list in the resource block to public-security-list & change the display_name from security-list-for-private-subnet to security-list-for-public-subnet
We'll keep the egress rule. However, for the SSH (TCP 22) ingress rule, let's change source = "10.0.0.0/16" to source = "0.0.0.0/0" so we can connect from outside the VCN
Your code should look like this
ingress_security_rules {
  stateless   = false
  source      = "0.0.0.0/0"
  source_type = "CIDR_BLOCK"
  # Get protocol numbers from https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml TCP is 6
  protocol = "6"
  tcp_options {
    min = 22
    max = 22
  }
}
ingress_security_rules {
  stateless   = false
  source      = "0.0.0.0/0"
  source_type = "CIDR_BLOCK"
  # Get protocol numbers from https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml ICMP is 1
  protocol = "1"
  # For ICMP type and code see: https://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml
  icmp_options {
    type = 3
    code = 4
  }
}
ingress_security_rules {
  stateless   = false
  source      = "10.0.0.0/16"
  source_type = "CIDR_BLOCK"
  # Get protocol numbers from https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml ICMP is 1
  protocol = "1"
  # For ICMP type and code see: https://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml
  icmp_options {
    type = 3
  }
}
Let's also add the following code to the 'outputs.tf' file
# Outputs for public security list
output "public-security-list-name" {
  value = oci_core_security_list.public-security-list.display_name
}

output "public-security-list-OCID" {
  value = oci_core_security_list.public-security-list.id
}
Now, let's run our scripts
$ terraform init
$ terraform plan
$ terraform apply
Now, we are able to see on the console that the security list & its ingress & egress rules have been created
Creating the Private Subnet
We are not done yet as we have not created the subnets for our VCN. Let's start by creating a private subnet
Let's create a file called 'private-subnet.tf' & add the following code to it
# Source from https://registry.terraform.io/providers/hashicorp/oci/latest/docs/resources/core_subnet
resource "oci_core_subnet" "vcn-private-subnet" {
  # Required
  compartment_id = "<compartment-ocid>"
  vcn_id         = module.vcn.vcn_id
  cidr_block     = "10.0.1.0/24"

  # Optional
  # Caution: For the route table id, use module.vcn.nat_route_id.
  # Do not use module.vcn.nat_gateway_id, because it is the OCID for the gateway and not the route table.
  route_table_id    = module.vcn.nat_route_id
  security_list_ids = [oci_core_security_list.private-security-list.id]
  display_name      = "private-subnet"
}
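One optional hardening step: oci_core_subnet also accepts a prohibit_public_ip_on_vnic argument, which makes this a true private subnet by blocking public IP assignment on any VNIC placed in it. If you want that behaviour, add this line inside the resource block:

```hcl
# Optional hardening: block public IPs on all VNICs in this subnet
prohibit_public_ip_on_vnic = true
```

Without it, instances in the subnet can still request a public IP even though the route table points at the NAT gateway.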
Let's also add the following to the 'outputs.tf' file
# Outputs for private subnet
output "private-subnet-name" {
  value = oci_core_subnet.vcn-private-subnet.display_name
}

output "private-subnet-OCID" {
  value = oci_core_subnet.vcn-private-subnet.id
}
Now, let's run our scripts
$ terraform init
$ terraform plan
$ terraform apply
And there you go. We've created our private subnet with a CIDR block of 10.0.1.0/24
Creating the Public Subnet
Let's do the above for the public subnet. To begin, let's create a file called 'public-subnet.tf' & add the following code
# Source from https://registry.terraform.io/providers/hashicorp/oci/latest/docs/resources/core_subnet
resource "oci_core_subnet" "vcn-public-subnet" {
  # Required
  compartment_id = "<compartment-ocid>"
  vcn_id         = module.vcn.vcn_id
  cidr_block     = "10.0.0.0/24"

  # Optional
  route_table_id    = module.vcn.ig_route_id
  security_list_ids = [oci_core_security_list.public-security-list.id]
  display_name      = "public-subnet"
}
And, add the following code to the 'outputs.tf' file
# Outputs for public subnet
output "public-subnet-name" {
  value = oci_core_subnet.vcn-public-subnet.display_name
}

output "public-subnet-OCID" {
  value = oci_core_subnet.vcn-public-subnet.id
}
Now, let's run our scripts
$ terraform init
$ terraform plan
$ terraform apply
And now, we have successfully created our public subnet with a CIDR block of 10.0.0.0/24
Summary
This sums up Part 2 of this blog. In Part 2, we've created the following:
- A compartment
- An Ubuntu 20.04 compute instance
- A Virtual Cloud Network (VCN)
- Security List for Private Subnet with Ingress & Egress Rules
- Security List for Public Subnet with Ingress & Egress Rules
- A Private Subnet
- A Public Subnet
In Part 3, I will be covering how we can incorporate all of the above into a single Terraform script & provision a full infrastructure on OCI. Stay tuned.




