Terraform configurations can include variables to make your configuration more dynamic and flexible.
Prerequisites
After following the earlier tutorials, you will have a directory named learn-terraform-aws-instance with the following configuration in a file called main.tf.
Ensure that your configuration matches this, and that you have run terraform init in the learn-terraform-aws-instance directory.
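If you no longer have the file, your main.tf should look roughly like the following. This is a sketch reconstructed from the resource block used later in this tutorial; the provider block and region are assumptions and may differ in your setup:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16" # assumed version constraint; match your earlier setup
    }
  }
}

provider "aws" {
  region = "us-west-2" # assumed region; use the one you configured earlier
}

resource "aws_instance" "app_server" {
  ami           = "ami-08d70e59c07c61a3a"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}
```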
Set the instance name with a variable
The current configuration includes a number of hard-coded values. Terraform variables allow you to write configuration that is flexible and easier to re-use.
Add a variable to define the instance name.
Create a new file called variables.tf with a block defining a new instance_name variable.
```hcl
variable "instance_name" {
  description = "Value of the Name tag for the EC2 instance"
  type        = string
  default     = "ExampleAppServerInstance"
}
```
Note
Terraform loads all files in the current directory ending in .tf, so you can name your configuration files however you choose.
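A variable block accepts a handful of optional arguments beyond the three shown above, and the type argument supports more than strings. The variables below are hypothetical and not part of this tutorial's configuration; they only illustrate other common type constraints:

```hcl
# Hypothetical variables for illustration; not used in this tutorial.
variable "instance_count" {
  description = "Number of EC2 instances to create"
  type        = number
  default     = 1
}

variable "extra_tags" {
  description = "Additional tags to apply to the instance"
  type        = map(string)
  default     = {}
}
```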
In main.tf, update the aws_instance resource block to use the new variable. The instance_name variable will use its default value ("ExampleAppServerInstance") unless you set a different one.
```hcl
resource "aws_instance" "app_server" {
  ami           = "ami-08d70e59c07c61a3a"
  instance_type = "t2.micro"

  tags = {
-   Name = "ExampleAppServerInstance"
+   Name = var.instance_name
  }
}
```
Apply your configuration
Apply the configuration. Respond to the confirmation prompt with a yes.
```
$ terraform apply

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.app_server will be created
  + resource "aws_instance" "app_server" {
      + ami = "ami-08d70e59c07c61a3a"
      + arn = (known after apply)
##...

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_instance.app_server: Creating...
aws_instance.app_server: Still creating... [10s elapsed]
aws_instance.app_server: Still creating... [20s elapsed]
aws_instance.app_server: Still creating... [30s elapsed]
aws_instance.app_server: Still creating... [40s elapsed]
aws_instance.app_server: Creation complete after 50s [id=i-0bf954919ed765de1]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
```
Now apply the configuration again, this time overriding the default instance name by passing in a variable using the -var flag. Terraform will update the instance's Name tag with the new name. Respond to the confirmation prompt with yes.
```
$ terraform apply -var "instance_name=YetAnotherName"
aws_instance.app_server: Refreshing state... [id=i-0bf954919ed765de1]

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # aws_instance.app_server will be updated in-place
  ~ resource "aws_instance" "app_server" {
        id   = "i-0bf954919ed765de1"
      ~ tags = {
          ~ "Name" = "ExampleAppServerInstance" -> "YetAnotherName"
        }
        # (26 unchanged attributes hidden)

        # (4 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_instance.app_server: Modifying... [id=i-0bf954919ed765de1]
aws_instance.app_server: Modifications complete after 7s [id=i-0bf954919ed765de1]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
```
Setting variables via the command-line will not save their values. Terraform supports many ways to use and set variables so you can avoid having to enter them repeatedly as you execute commands. To learn more, follow our in-depth tutorial, Customize Terraform Configuration with Variables.
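For example, Terraform automatically loads values from a file named terraform.tfvars in the working directory, and also reads environment variables prefixed with TF_VAR_. A minimal sketch, reusing the value from the apply step above:

```hcl
# terraform.tfvars -- loaded automatically by terraform plan/apply
instance_name = "YetAnotherName"

# Alternatively, set the value in your shell before running terraform:
#   export TF_VAR_instance_name=YetAnotherName
```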