
Error: “multiple VPC Endpoints matched”

I am using a data "aws_vpc_endpoint" block to retrieve multiple endpoint IDs based on the VPC ID. How can I retrieve these endpoints so that I can reference them in another resource? Or is it even possible to retrieve multiple endpoints from this data source? Any suggestions or advice would be much appreciated. Here is the code snippet. The count.index is already handled correctly in resource "aws_route"; now I am focused on retrieving multiple endpoints to add to the aws_route.

data "aws_vpc_endpoint" "firewall-endpoints" { 
  vpc_id = aws_vpc.vpc.id

  filter {
    name = "tag:Example"
    values = [true]
  }
}

resource "aws_route" "example" {
  count                  = var.number_azs
  route_table_id         = aws_route_table.example[count.index].id
  destination_cidr_block = var.tgw_aws_route[0]
  vpc_endpoint_id        = data.aws_vpc_endpoint.firewall-endpoints.id
}
mp7
  • What is the full error message, showing exactly the lines where the error occurs? – Marcin May 26 '22 at 02:43
  • The error is: "multiple VPC Endpoints matched: use additional constraints to reduce matches to a single VPC endpoint." I want to return multiple VPC endpoint IDs from the data source to reference in vpc_endpoint_id. The data source is finding multiple VPC endpoints with that filter, and that is correct; that is exactly what I want. However, how would I use count, for_each, or any other function or expression to retrieve each VPC endpoint ID? – mp7 May 26 '22 at 13:44

2 Answers


The documentation is pretty explicit:

The arguments of this data source act as filters for querying the available VPC endpoints. The given filters must match exactly one VPC endpoint whose data will be exported as attributes.

If you want to use VPC endpoints for multiple services, you'll need to create a data source for each one. This could be done concisely with for_each.


Update: I'm not sure how your endpoints are set up, but you need to find a unique way to refer to them. An example of using for_each here could look like this:

locals {
  services = {
    s3  = "com.amazonaws.us-east-2.s3"
    ssm = "com.amazonaws.us-east-2.ssm"
  }
}

data "aws_vpc_endpoint" "services" {
  for_each = local.services

  vpc_id       = aws_vpc.vpc.id
  service_name = each.value
}

To then use the endpoint, you can refer to it as e.g. data.aws_vpc_endpoint.services["s3"].id. And if you want to loop over them, you can again refer to the local.services dictionary.
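
For example, here is a minimal sketch of wiring those endpoints into your route resource. It assumes one route per service and reuses the route table and CIDR variable names from your snippet, so adjust the indexing to match how your routes are actually laid out:

resource "aws_route" "example" {
  for_each = local.services

  # Assumed names from the question; creates one route per service endpoint.
  route_table_id         = aws_route_table.example[0].id
  destination_cidr_block = var.tgw_aws_route[0]
  vpc_endpoint_id        = data.aws_vpc_endpoint.services[each.key].id
}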

Nick K9
  • Hi Nick, could you give an example of how to use the for_each in the data resource to reflect in vpc_endpoint_id? – mp7 May 26 '22 at 13:37
  • Did the example help? – Nick K9 May 30 '22 at 13:25
  • It's hard to see what's going on here with no formatting and so little context. If the question hasn't been answered, you can add an update to the end of the original question. But if you've gotten past the "multiple endpoints matched" error now, then you can accept the answer and ask a new question about the new problem. – Nick K9 May 30 '22 at 22:38
  • My apologies, I did not mean to post that, but yes, I am past the "multiple endpoints matched" error and on to "no matching VPC endpoint found". I will post another question. – mp7 May 31 '22 at 13:26

You can try aws_resourcegroupstaggingapi_resources to return multiple resources that have specific tags:

data "aws_resourcegroupstaggingapi_resources" "test" {

  tag_filter {
    key    = "Example"
    values = ["tag-value-1", "tag-value-2"]
  }
}

You can also add resource_type_filters, but I'm not sure what the type is for VPC endpoints.
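
For example, a sketch with that filter added (the exact resource type string for VPC endpoints is an assumption here, so check it against the Resource Groups Tagging API documentation):

data "aws_resourcegroupstaggingapi_resources" "test" {
  # "ec2:vpc-endpoint" is an assumed value; verify the supported resource
  # type strings for the Resource Groups Tagging API before relying on it.
  resource_type_filters = ["ec2:vpc-endpoint"]

  tag_filter {
    key    = "Example"
    values = ["tag-value-1", "tag-value-2"]
  }
}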

Marcin
  • That was clever, I did not think of this. Thank you. However, this data source does not export the IDs of the resources that the tags reference. I think I'm going to have to use for_each; just trying to figure out how I would do so. And the type filter for VPC endpoints is "ec2:vpcendpoint". Ref: https://docs.aws.amazon.com/config/latest/developerguide/resource-config-reference.html#awsnetworkfirewall – mp7 May 26 '22 at 15:54