How to Build Tailor-Made AMIs to Upgrade Your Infrastructure

Translator's note: This article is a translation of "How to Build Tailor-Made AMIs to Upgrade Your Infrastructure," which describes how custom AMIs can improve infrastructure efficiency and performance.


There comes a time in everyone’s infrastructure journey when you have to build your own AWS AMIs. Instead of configuring your instances every time one is provisioned, you create an image that comes preloaded with:

  • Needed applications/tools/scripts
  • Patched with all the latest packages
  • Security hardening of the OS
  • Information regarding your environment so it can come online and instantly start working.
  • etc…

I don’t think I have to sell anyone too hard on why all of the above is nice to have. It also saves you time and effort since instead of doing this PER instance, it is only done once when the image itself is created.

Another benefit of custom images is the concept of “pets” vs “sheep”. When your images are preloaded with everything you need, you can destroy and rebuild your servers as desired. Your servers become sheep which can be swapped out as needed. This is compared to having servers that need to be reconfigured each time they are built. Instead of destroying the servers/pets as needed, you have to take care of them and all the difficulties that come with that (e.g. constantly patching, updating, hardening for security reasons, etc…)

In all fairness, you can reduce the pain of vanilla images by using something like Puppet, Nomad, or Ansible. The problem with this approach is that these tools run at build time: when a server boots up, they need 10–20 minutes to configure it, compared to a custom image that is instantly ready or takes a minute or two to be usable.

Now that we understand some of the problems and benefits, we need to design how we want to build our custom images.

When building custom images, much like any automation, we want to make sure we are making smart decisions. We need to produce something that is:

  • Easy to use
  • Able to be automated
  • Scalable
  • Easy to update
  • Secure
  • DRY — Reduce/Remove the need to copy and paste code

For our custom images, here is an outline of how that looks:

  • We will create a base image that has all the general dependencies needed by all server types (e.g. patching, monitoring tools, OS hardening).
  • All other images will use the base image as a starting point and then tack on any additional tools/applications/configurations needed for the function-specific image (e.g. bastion servers, Kubernetes nodes).
  • All images/AMIs will be hosted in a central AWS account. From the central account, the images will be shared with our other AWS accounts. This allows us to manage the images from a single account, cuts down on the provisioning time, helps keep us DRY, and overall makes this setup easier to use/manage.
  • Packer will be used for creating the image
  • Ansible will be used for configuring the image
  • Terraform will bootstrap the instances created from the image at build time.
  • The images will not contain any secrets/API-keys/passwords. These will be provided by Terraform as part of the bootstrapping process. This is more secure and also makes the images more extensible.

Packer, Ansible, and Terraform will be explained in their own dedicated sections. I will also include code or references which can be used to get you started with custom images in the “Let’s Get Building!” section.

Packer

For those who are not familiar, Packer is part of the Hashicorp family (so you know it is going to be amazing). It automates the creation of machine images by:

  • creating a temporary instance
  • configuring the temporary instance according to the instructions you provided
  • creating an image from the temporary instance
  • terminating the temporary instance

Note: Packer added support for HCL in v1.5.0. All of the examples below are in HCL. If you have an older version of Packer and do not want to upgrade, you will need to convert the examples into the equivalent JSON. This doc can help you transition from JSON to HCL.
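For a sense of that mapping, here is one of the variables from the file below written in the older JSON template format (a minimal sketch; see the linked doc for the full conversion rules):

```json
{
  "variables": {
    "ssh_user": "ubuntu"
  }
}
```

In a JSON template this variable would be referenced as {{user `ssh_user`}}, whereas the HCL examples below use ${var.ssh_user}.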

Ansible

Ansible is a server configuration tool. Think of tools like Puppet, Puppet Bolt, Chef, Salt, etc… and you’ll be in the same ballpark. It allows you to configure your servers using code. One of the bullet points in the “Packer” section was:

configuring the temporary instance according to the instructions you provided

When configuring your images through Packer, you make use of what are called provisioners. Taking a look at Packer’s documentation, you can see there are many options:

[Image: Packer Provisioners]

We are opting to use “Ansible” since we already have an Ansible codebase for configuring/maintaining our existing “Pets”. If you are already using Chef, Puppet, Salt, etc…, I would highly encourage you to use the relevant Provisioner. This will allow you to reuse existing code, and make use of the skillsets you already have.

The only caveat is that I would recommend avoiding provisioners like “Shell” or “Windows Shell” for anything extensive. While it might be tempting to write a quick bash script or equivalent, it simply is not scalable and eventually becomes too complicated to manage.

As an example, I was able to harden my images by using this Ansible role. Imagine how many lines of bash it would take to manage the same thing. Additionally, even if you wrote the bash to match this role, you would have to maintain all those scripts to make sure they kept up with best practices. Using the Ansible role took only 5 minutes, and the role is maintained by the vendor. The bash scripts would have taken hours to write and would add ongoing overhead, since I would have to maintain them myself.

As you’ll see in the sample code below, we also make use of “Inspec”. This provisioner allows us to validate that our images are properly hardened. Adding validation testing to your automation is always a good thing. In this instance, your Secops team will be appreciative.

Terraform

As mentioned earlier, we are not going to store any secrets/passwords/API-keys in our images. This is desirable for a few reasons, chiefly security and configurability. With this requirement, though, we need “something” to provide the secrets/configs to the servers once they are provisioned. Since Terraform is responsible for standing up our servers, it makes sense to use it to provide the necessary information at build time.

While Terraform plays a very important part in this process, its role is fairly small and straightforward. When you are provisioning an instance, AWS and other providers give you the ability to specify “User Data”.

When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts.

Using this mechanism we plan to have Terraform do two simple things:

  • Create a bash script that contains all the configs/secrets needed by the server
  • Execute an install script that will load the secrets and configure the server.
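A minimal Terraform sketch of those two steps might look like the following (the resource names, variables, and template path are illustrative assumptions, not the article's actual code):

```hcl
resource "aws_instance" "bastion" {
  # Hypothetical data source resolving the latest shared custom AMI.
  ami           = data.aws_ami.custom_base.id
  instance_type = "t3.small"

  # Rendered at provision time: writes the configs/secrets script and
  # then executes the install script baked into the image.
  user_data = templatefile("${path.module}/templates/bootstrap.sh.tpl", {
    environment        = var.environment
    monitoring_api_key = var.monitoring_api_key
  })
}
```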

Now that we have reviewed the design and all the major components, we can get started!

Let’s Get Building!

To get started we are going to create a “packer” directory, and two subdirectories called “base-image” & “assets”.

mkdir -p packer/{assets,base-image}

In the future, as we create additional images, they’ll simply be added here in their own folders. The “assets” directory will be used to store dependencies like our Ansible code. This structure will make it easy to organize our files and also makes it very easy to automate via CI/CD.
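The resulting layout looks roughly like this (the two files inside base-image are created in the next step):

```text
packer/
├── assets/
│   └── ansible/        # shared dependencies, e.g. our Ansible code
└── base-image/
    ├── base.pkr.hcl
    └── variables.pkr.hcl
```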

In the base-image directory we are going to create two files:

  • variables.pkr.hcl — As you can infer, this will store our variables
  • base.pkr.hcl — This is the file where we define how we want our image to be built.

Note: The names of these files do not matter to Packer, but they make it easier for us to organize our content.

variables.pkr.hcl

Using variables in Packer is easy. Our variables file will look something like this:

variable "ami-description" {
  type    = string
  default = "My custom Ubuntu Image"
}

variable "aws_access_key" {
  type    = string
  default = ""
}

variable "aws_secret_key" {
  type    = string
  default = ""
}

variable "aws_profile" {
  type    = string
  default = "myAWSProfile"
}

variable "aws_acct_list" {
  type    = list(string)
  default = [
    #acctA
    "000000000000",
    #acctB
    "111111111111"
  ]
}

variable "destination_regions" {
  type    = list(string)
  default = [
    "us-west-1",
    "us-west-2"
  ]
}

variable "fmttime" {
  type    = string
  default = "{{isotime \"2006-01-02-150405\"}}"
}

variable "source_image_name" {
  type    = string
  default = "ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-"
}

variable "ssh_user" {
  type    = string
  default = "ubuntu"
}

Once defined, variables can be easily referenced in Packer using the format ${var.name_here}. In this particular example, I want to point out a few things:

  • We are defining an aws_profile. The secret and access keys are intentionally defined as blank. This configuration will use your AWS profile to authenticate. If you want to use access/secret keys instead, omit the profile variable.
  • The “aws_acct_list” variable will be used to tell Packer all the accounts our image should be shared with.
  • The “destination_regions” list will tell Packer what regions my image should be available in.

As mentioned earlier, all of the above will allow us to control our images from a centralized AWS account while providing access from our sub-accounts.
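On the consuming side, a sub-account can then look up the latest shared image with a Terraform data source along these lines (the owner ID is a placeholder for the central account):

```hcl
data "aws_ami" "custom_base" {
  most_recent = true
  owners      = ["222222222222"] # placeholder: the central AWS account ID

  # Match the naming convention used when the images are built.
  filter {
    name   = "name"
    values = ["custom/ubuntu-*"]
  }
}
```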

base.pkr.hcl — source section

The base.pkr.hcl file will define the image we want to build and how we want it to be configured.

We start our file by specifying a source section. This section will contain all the information needed to create a temporary instance. The second and final part of this file is the build section. This portion of the code will let Packer know how the temporary image should be customized.

Here is the source section in its entirety. It will be easier to walk through what we are aiming for if you can see the entire picture:

source "amazon-ebs" "example" {
  ami_name                    = "custom/ubuntu-${var.fmttime}"
  ami_description             = "${var.ami-description}"
  ami_users                   = "${var.aws_acct_list}"
  access_key                  = "${var.aws_access_key}"
  secret_key                  = "${var.aws_secret_key}"
  profile                     = "${var.aws_profile}"
  region                      = "us-west-1"
  instance_type               = "t3.small"
  ami_regions                 = "${var.destination_regions}"
  associate_public_ip_address = true
  communicator                = "ssh"
  ssh_username                = "${var.ssh_user}"

  vpc_filter {
    filters = {
      "tag:Name" : "myVPC",
      "isDefault" : "false"
    }
  }

  subnet_filter {
    filters = {
      "state" : "available",
      "tag:Name" : "*public*"
    }
    random = true
  }

  source_ami_filter {
    filters = {
      name                = "${var.source_image_name}*"
      virtualization-type = "hvm"
      root-device-type    = "ebs"
    }
    owners      = ["099720109477"]
    most_recent = true
  }

  run_tags = {
    OS_Version = "Ubuntu"
  }

  tags = {
    OS_Version = "Ubuntu"
    Name       = "custom/ubuntu-${var.fmttime}"
  }
}

Most of these settings are pretty self-explanatory. We are doing things like naming the image, providing a description, defining tags, etc… The image name will contain a timestamp and look something like custom/ubuntu-2020-01-20-185613. This naming convention will ensure all the images are unique and make them sortable.

A few settings we should expand on:

ami_users is used to tell Packer what AWS accounts should have access to your custom image. This ties into the aws_acct_list variable we defined earlier.

variable "aws_acct_list" {
  type    = list(string)
  default = [
    #acctA
    "000000000000",
    #acctB
    "111111111111"
  ]
}

If you only have a single AWS account, you can omit this setting.

ami_regions determines what regions the resulting AMI should be copied into. If you are only operating in a single region, you can omit this variable. Even if you are in a single region, though, I would still recommend keeping it in place so that in a DR scenario your images are ready and waiting.

The VPC and subnet filters are used to determine the VPC and subnet that the temporary instance will be created in. These values can either be hardcoded, or you can create a filter to dynamically find them. You can also omit this section if you have a “default” VPC in AWS. We chose to use filters in case we ever re-provision our VPCs:

vpc_filter {
  filters = {
    "tag:Name" : "myVPC",
    "isDefault" : "false"
  }
}

subnet_filter {
  filters = {
    "state" : "available",
    "tag:Name" : "*public*"
  }
  random = true
}

We are going to match on the AWS “Name” tag for both the VPC and subnet. Note: You need SSH access to the temporary instance, so make sure you define your VPC/subnet accordingly.

The source AMI is the image that the temporary instance will be launched with. We want to make sure we have the latest Ubuntu image available and using the source_ami_filter allows us to do that.

source_ami_filter {
  filters = {
    name                = "${var.source_image_name}*"
    virtualization-type = "hvm"
    root-device-type    = "ebs"
  }
  owners      = ["099720109477"]
  most_recent = true
}

We are looking for an image whose name starts with ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-, we want the latest image, and the image must come from Ubuntu’s AWS account (account #: 099720109477).

With this approach, any time Ubuntu releases a new image, our build process will automatically pick it up.

base.pkr.hcl — build section

Once the temporary instance is built using the information from the source section, Packer will configure the instance using the steps provided in the build section.

build {
  sources = [
    "source.amazon-ebs.example"
  ]

  provisioner "ansible" {
    user            = "${var.ssh_user}"
    playbook_file   = "../assets/ansible/provision-base-server.yml"
    extra_arguments = ["--extra-vars", "os_ignore_users: [\"${var.ssh_user}\"] os_filesystem_whitelist: [\"squashfs\"]"]
  }

  // provisioner "inspec" {
  //   inspec_env_vars = ["CHEF_LICENSE=accept"]
  //   profile         = "https://github.com/dev-sec/linux-baseline"
  // }
}

Since most of our configuration comes from Ansible, our build section is pretty straightforward: we tell Packer to launch our Ansible playbook. I also included the code for the Inspec provisioner for reference but left it commented out. As a reminder, Inspec validates that your image has been hardened; it works in tandem with the Ansible role we use for hardening our servers. We have it commented out because our custom settings cause the vanilla validation profile to fail, so it will be tweaked in future iterations.

This is everything you need on the Packer end.
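With both files in place, the image can be built from the base-image directory roughly like this (assuming Packer ≥ 1.5 and working AWS credentials; commands shown for illustration):

```shell
cd packer/base-image

# Catch syntax/configuration errors before launching any instances
packer validate .

# Build the AMI; any variable can be overridden on the command line
packer build -var 'aws_profile=myAWSProfile' .
```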

Ansible

As mentioned earlier, Ansible allows you to configure your servers in an automated fashion. I won’t jump into this section too heavily since custom images by their very definition have to be tailored to your use-case. I do want to highlight a few important patterns, as well as give you an idea of my approach that you can use to get started. The playbook called by Packer looks something like this:

- hosts: all
  become: yes
  become_user: root
  become_method: sudo
  roles:
    - { role: update-pkgs, tags: ["updatepkgs"] }
    - { role: install-monitoring, tags: ["monitoring"] }
    - { role: aws-inspector, tags: ["inspector"] }
    - { role: install-secops-tool, tags: ["secops"] }
    - { role: dev-sec.os-hardening, tags: ["os-hardening"] }
    - { role: haveged, tags: ["haveged"] }
    - { role: post-provisioner, tags: ["post"] }

We are updating the packages on the server, installing monitoring/security tools, hardening the image, and installing “haveged” to generate entropy on the servers (also a security thing).

If you are not familiar with Ansible or want an idea of what I am doing in these roles, here is an example of one.

---
- name: Ubuntu | Upgrade all current packages
  apt:
    update_cache: true
    upgrade: dist
    allow_unauthenticated: true
  when: ansible_os_family in ['Debian', 'Ubuntu']

- name: Ubuntu | Install Unattended-upgrades
  apt:
    update_cache: true
    name: unattended-upgrades
    state: present
  when: ansible_os_family in ['Debian', 'Ubuntu']

- name: Ubuntu | Configure Unattended-upgrades
  copy:
    src: 50unattended-upgrades.conf
    dest: /etc/apt/apt.conf.d/50unattended-upgrades
    owner: root
    group: root
    mode: 0644
  when: ansible_os_family in ['Debian', 'Ubuntu']

- name: Ubuntu | Configure Unattended-upgrades Timing
  copy:
    src: 20auto-upgrades.conf
    dest: /etc/apt/apt.conf.d/20auto-upgrades
    owner: root
    group: root
    mode: 0644
  when: ansible_os_family in ['Debian', 'Ubuntu']

This example is from the update packages role. It essentially uses apt-get to make sure all the packages are up to date. We then also install and configure “unattended-upgrades” so the server will periodically auto-update itself.
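For context, the 20auto-upgrades file referenced above typically contains just two directives enabling the periodic apt runs (shown here as the stock Debian/Ubuntu defaults; your copy may differ):

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```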

The main thing I want to focus on in Ansible is the “post-provisioner” role. As mentioned earlier, we do not want to store any sensitive information in these images. The post-provisioner is the mechanism that allows Terraform to inject the secrets at build time and bootstrap our servers.

When creating this role, we didn’t want to tightly couple Terraform with our images. This means that Terraform should not have to know the 20 scripts needed to configure image Y, or the 10 scripts needed to configure image X. Instead, Terraform should just kick off a single script, and the rest would happen automatically.
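To make the hand-off concrete, here is a minimal runnable sketch of that flow: the first file stands in for what Terraform renders via user data, and sourcing it stands in for the install script baked into the image (all paths, names, and values are hypothetical placeholders, not the article's actual scripts):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Step 1 (Terraform user data): write the configs/secrets to the instance.
# Values are placeholders that Terraform's template would fill in.
cat > /tmp/bootstrap-config.sh <<'EOF'
export ENVIRONMENT="staging"
export MONITORING_API_KEY="placeholder-key"
EOF
chmod 0600 /tmp/bootstrap-config.sh

# Step 2 (install script baked into the image): load the secrets and
# finish configuring the server.
source /tmp/bootstrap-config.sh
echo "bootstrapping environment: ${ENVIRONMENT}"
```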

That single script is our install.sh:

#!/usr/bin/env bash