
Summary: Tables that show up in NoSQL Workbench when using DynamoDB Local mysteriously disappear when switching to LocalStack: the connection succeeds, but the tables no longer appear in NoSQL Workbench (though they still show up via the aws-cli).


I created a table in DynamoDB Local running in Docker that worked in NoSQL Workbench. I wrote code to seed that database, and it all worked and showed up in NoSQL Workbench.

I switched to LocalStack (so I can interact with other AWS services locally). I was able to create a table with Terraform and can seed it with my code (using the configuration given here). Using the aws-cli, I can see the table, etc.
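For reference, my seeding code points at LocalStack roughly like this (a minimal Python sketch in the style of boto3; the endpoint port, credentials, and region are assumptions based on LocalStack defaults, since the actual code isn't shown here):

```python
# Connection settings for LocalStack's DynamoDB (values are assumptions:
# 4566 is LocalStack's default edge port; credentials are mocks).
LOCALSTACK_DYNAMODB = {
    "endpoint_url": "http://localhost:4566",
    "region_name": "us-east-1",  # ignored by DynamoDB Local, NOT by LocalStack
    "aws_access_key_id": "mock_access_key",
    "aws_secret_access_key": "mock_secret_key",
}

def make_dynamodb_client(factory):
    """Build a DynamoDB client from a factory such as boto3.client.

    Example: client = make_dynamodb_client(boto3.client)
    The factory is injected so this sketch can be exercised without
    LocalStack running.
    """
    return factory("dynamodb", **LOCALSTACK_DYNAMODB)
```

With a real `boto3.client` factory, the resulting client can create and seed tables against LocalStack just as it would against AWS.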

But inside NoSQL Workbench, I couldn't see the table I had created and seeded when connecting as shown below. There were no connection errors; the table just wasn't there. It doesn't seem related to the bugginess issue described here, as restarting the application didn't help. I didn't change any AWS account settings (region, keys, etc.).

NoSQL Workbench screenshot showing connection settings

Ethan Kent

2 Answers


If you don't want to change your region to localhost, there is another solution. From the LocalStack docs:

"DYNAMODB_SHARE_DB: When activated, DynamoDB will use a single database instead of separate databases for each credential and region."

For example, add the variable to your docker-compose.yml:

  ...

  localstack:
    container_name: my_localstack
    image: "localstack/localstack:0.13.0.8"
    environment:
      - DYNAMODB_SHARE_DB=1   

  ...
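To see why this helps: LocalStack keeps separate DynamoDB state per credential and region, so a table created under one region is invisible when NoSQL Workbench queries under its localhost region. A toy Python model of that behavior (purely illustrative, not LocalStack's actual implementation):

```python
class ToyDynamoDB:
    """Toy model of LocalStack's per-credential/per-region DynamoDB state.

    Illustrates why DYNAMODB_SHARE_DB=1 makes tables visible across
    regions; this is not LocalStack's real code.
    """

    def __init__(self, share_db=False):
        self.share_db = share_db
        self._stores = {}  # (access_key, region) -> set of table names

    def _key(self, access_key, region):
        # With DYNAMODB_SHARE_DB, every caller hits the same store.
        return ("shared",) if self.share_db else (access_key, region)

    def create_table(self, access_key, region, name):
        self._stores.setdefault(self._key(access_key, region), set()).add(name)

    def list_tables(self, access_key, region):
        return sorted(self._stores.get(self._key(access_key, region), set()))
```

Without share_db, a table created under us-east-1 doesn't show up when listed under localhost (the region NoSQL Workbench uses); with share_db=True, both see the same table.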

jimasp

Summary: To use NoSQL Workbench with LocalStack, set the region to localhost in your code and Terraform config, then fix the resulting validation error (complaining that there is no localhost region) by setting skip_region_validation = true in the aws provider block of the Terraform config.


The problem is revealed by the help text in the screenshot above:

Help text in NoSQL Workbench

NoSQL Workbench uses the localhost region.

When using DynamoDB Local, the region appears to be ignored, so this quirk is hidden (i.e. there is a mismatch between the region in my Terraform file and code on the one hand and NoSQL Workbench on the other, but it doesn't matter with DynamoDB Local).

But with LocalStack, the region is not ignored, so the problem surfaced.

I wouldn't have written this up except for one more quirk that took a while to figure out. When I updated the Terraform configuration thus:

provider "aws" {
  access_key = "mock_access_key"
  // For compatibility with NoSQL Workbench local connections
  region = "localhost"
  // ...
}

I started getting this error when running terraform apply:

╷
│ Error: Invalid AWS Region: localhost
│
│   with provider["registry.terraform.io/hashicorp/aws"],
│   on main.tf line 1, in provider "aws":
│    1: provider "aws" {
│
╵

I dug around a bit and found this issue in the AWS provider repo for Terraform, which explains that you should do this:

provider "aws" {
  access_key = "mock_access_key"
  // For compatibility with NoSQL Workbench local connections
  region                 = "localhost"
  skip_region_validation = true
  // ...
}
Ethan Kent