167

I got the following error during a terraform plan that occurred in my pipeline:

Error: Error locking state: Error acquiring the state lock: ConditionalCheckFailedException: The conditional request failed
Lock Info:
ID:        9db590f1-b6fe-c5f2-2678-8804f089deba
Path:      ...
Operation: OperationTypePlan
Who:       ...
Version:   0.12.25
Created:   2020-05-29 12:52:25.690864752 +0000 UTC
Info:      
Terraform acquires a state lock to protect the state from being written
by multiple users at the same time. Please resolve the issue above and try
again. For most commands, you can disable locking with the "-lock=false"
flag, but this is not recommended.

It is weird because I'm sure there is no other concurrent plan. Is there a way to deal with this? How should I remove this lock?

veben
  • 19,637
  • 14
  • 60
  • 80

14 Answers

243

Cause of Error

This error usually appears when a previous terraform plan or terraform apply run did not finish cleanly, for example because the network connection dropped or the process was terminated before it could release the lock. Terraform then "thinks" that this process is still working on the infrastructure and blocks other processes from working with the same infrastructure and state at the same time, in order to avoid conflicts.

As stated in the error message, you should make sure that there is really no other process still running (e.g. from another developer or from some build-automation). If you force-unlock in such a situation you might screw up your terraform state, making it hard to recover.

Resolution

If there really is no other process still running, run this command:

terraform force-unlock 9db590f1-b6fe-c5f2-2678-8804f089deba

(replacing the lock ID with the one shown in the error message)

If you are not sure whether another process is running and you are worried about making things worse, I would recommend waiting for a while (say an hour), trying again, and then trying again after maybe another 30 minutes. If the error still persists, it is likely that there really is no other process and it is safe to force-unlock as described above.
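
Since the error in the question comes from DynamoDB (ConditionalCheckFailedException), you can also inspect the lock table directly before deciding to unlock. A minimal sketch, assuming an S3 backend with a DynamoDB lock table named terraform-locks (substitute your own table name):

# List the current lock items, including their Info payloads, in the
# DynamoDB table used by the S3 backend (table name is just an example).
aws dynamodb scan --table-name terraform-locks

If the scan only shows a stale entry matching the ID from the error message, force-unlocking it as above should be safe.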

Stéphane Bruckert
  • 21,706
  • 14
  • 92
  • 130
Falk Tandetzky
  • 5,226
  • 2
  • 15
  • 27
  • 3
    what if I have closed my terminal session and therefore I am not able to access the numerical id – ashishpm Sep 06 '21 at 09:27
  • 3
    when you try to do an operation that needs the state lock, the error will come up again, and then you can get the ID – cryanbhu Sep 07 '21 at 10:03
  • 3
    I tried to force-unlock and i got this error, am using a GCS state backend. `Failed to unlock state: 2 errors occurred: * storage: object doesn't exist * storage: object doesn't exist` – cryanbhu Sep 10 '21 at 02:58
  • 8
    Got this error: `Local state cannot be unlocked by another process` – TrevorDeTutor Sep 21 '22 at 19:27
124

It looks like the lock persisted after the previous pipeline run. I had to remove it using the following command:

terraform force-unlock -force 9db590f1-b6fe-c5f2-2678-8804f089deba

Alternatively, relaunch the plan with the -lock=false option:

terraform plan -lock=false ...
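
If the lock might be held by another legitimate run rather than a stale one, a gentler alternative (a sketch; the timeout value is just an example) is to let Terraform wait for the lock instead of disabling it:

# Wait up to 20 minutes for the existing lock to be released
# rather than turning locking off entirely.
terraform plan -lock-timeout=20m ...
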
veben
  • 19,637
  • 14
  • 60
  • 80
17

I ran into the same issue when I was using Terraform with an S3 and DynamoDB backend.

Reason: I forcefully terminated an apply process, which prevented me from acquiring the lock again.

Solution: when it fails it returns an ID; with that ID we can force-unlock:

terraform force-unlock -force 6638e010-8fb0-46cf-966d-21e806602f3a
Mansur Ul Hasan
  • 2,898
  • 27
  • 24
14

I had the same issue and tried different commands, terraform force-unlock -force and terraform force-unlock <process_id>, but they didn't work for me. A quick workaround is to kill that particular process and run again:

ps aux | grep terraform
sudo kill -9 <process_id>

Dharman
  • 30,962
  • 25
  • 85
  • 135
Pachha Gopi
  • 141
  • 1
  • 2
  • That was the case. I first thought that the process is hanging in the cloud side, but it is actually on my local. killing the process solved the issue. – aykcandem Jul 27 '22 at 13:04
  • I was facing the same issue. I ran the terraform plan command and it asked me to input the value and I pressed `ctrl+z` to exit. But it created a process in the background. Once I killed the process, everything worked smoothly. – Ali Hassan Sep 12 '22 at 00:16
  • I think it is not the process_id. When you call "terraform plan", it would show out the ID of the Terraform Process Lock ID (not the one on the Task Manager). Copy the ID and run terraform force-unlock – Take Ichiru May 04 '23 at 03:29
6

If terraform force-unlock gives the error "Local state cannot be unlocked by another process", find the running Terraform process and kill it to remove the lock. On Windows: open Task Manager and search for the terraform console process. On Linux: grep for the terraform process and kill it using kill -9.

  • I'm getting `failed to retrieve lock info: unexpected end of JSON input` – Judy007 Feb 05 '21 at 20:25
  • I should also add that in case you are using Terragrunt as well, you should run `terragrunt force-unlock` instead, so it knows the configuration and probably where the state file is and how it's called.. Otherwise you'll get the same error that it "cannot be unlocked by another process". – Dennis98 Jun 10 '22 at 15:34
4

For anyone running into this issue when running Terraform against AWS, make sure you're running against the expected profile. I ran into this issue today and realised that I needed to switch my profile:

$ export AWS_PROFILE=another_one
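
To double-check which account and identity Terraform will actually use, you can ask the AWS CLI directly (a quick sanity check, not specific to Terraform):

# Show the account and ARN of the currently active credentials/profile.
aws sts get-caller-identity
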
Rob
  • 981
  • 12
  • 27
2

I got the state lock error because I was missing s3:DeleteObject and dynamodb:DeleteItem permissions.

I had get and put permissions, but not delete. So my CircleCI IAM user could check for locks and add locks, but couldn't remove locks when it was done updating state. (Maybe I had watched tutorials that used remote state but didn't use state locking.)

These steps fixed the issue:

  1. Run terraform force-unlock <error message lock ID> (I got this step from Falk Tandetzky and veben's answers)
  2. Allow "s3:DeleteObject" permission for the resource "arn:aws:s3:::mybucket/path/to/my/key"
  3. Allow "dynamodb:DeleteItem" permission for the resource "arn:aws:dynamodb:*:*:table/mytable"

All the permissions, with examples, are listed in the Terraform S3 backend documentation:

https://www.terraform.io/language/settings/backends/s3
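
For illustration, a minimal sketch of attaching just the two missing permissions to the CI user. The user name circleci-user and the policy name are hypothetical; the ARNs are the ones from the steps above:

# Write the two missing statements to a policy file (names are hypothetical).
cat > tf-backend-delete.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:DeleteObject",
      "Resource": "arn:aws:s3:::mybucket/path/to/my/key"
    },
    {
      "Effect": "Allow",
      "Action": "dynamodb:DeleteItem",
      "Resource": "arn:aws:dynamodb:*:*:table/mytable"
    }
  ]
}
EOF

# Attach it as an inline policy to the CI user (user name is hypothetical).
aws iam put-user-policy --user-name circleci-user \
  --policy-name tf-backend-delete \
  --policy-document file://tf-backend-delete.json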

user3827510
  • 149
  • 1
  • 7
1

I experienced this issue when trying to run a Terraform command for a resource in Google Cloud within a GitHub Actions workflow.

The issue started after I cancelled a running terraform apply command.

When I tried to run other terraform commands like the terraform apply -refresh-only command, I got the error below:

Run terraform apply -var-file env/dev-01-default.tfvars -refresh-only -auto-approve
/runner/_work/_temp/8cdffd5c-b7a1-446d-a294-c1e34b63cde4/terraform-bin apply -var-file env/dev-01-default.tfvars -refresh-only -auto-approve
╷
│ Error: Error acquiring the state lock
│ 
│ Error message: writing "gs://my-dev/k8s/default/dev-01.tflock" failed:
│ googleapi: Error 412: At least one of the pre-conditions you specified did
│ not hold., conditionNotMet
│ Lock Info:
│   ID:        1688478112611453
│   Path:      gs://my-dev/k8s/default/dev-01.tflock
│   Operation: OperationTypeApply
│   Who:       runner@pm-runners-m4h7k-gcxqk
│   Version:   1.1.8
│   Created:   2023-07-04 13:41:52.517578046 +0000 UTC
│   Info:      
│ 
│ 
│ Terraform acquires a state lock to protect the state from being written
│ by multiple users at the same time. Please resolve the issue above and try
│ again. For most commands, you can disable locking with the "-lock=false"
│ flag, but this is not recommended.
╵
Error: Terraform exited with code 1.
Error: Process completed with exit code 1.

Here's how I fixed this:

The failed run left a lock file called dev-01.tflock at the location gs://my-dev/k8s/default/dev-01.tflock, with a lock info ID of 1688478112611453.

One way is to run the command below:

terraform force-unlock -force <lock-info-id>

Another way is to locate the lock file in the Google Cloud Storage bucket where the state file is stored. In the same directory as the state file you will find the lock file; go ahead and delete it.

After that, you can run the Terraform command you want and it should run fine.
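
If you go the manual route, here is a sketch of that second option using gsutil, with the paths taken from the error message above (adjust them to your own bucket):

# Confirm the stale lock object exists next to the state file, then delete it.
gsutil ls gs://my-dev/k8s/default/
gsutil rm gs://my-dev/k8s/default/dev-01.tflock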

Promise Preston
  • 24,334
  • 12
  • 145
  • 143
0

GCP: In my case the issue was resolved after changing the permission to "Storage Object Admin" in Google Cloud Storage.
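
For reference, a sketch of granting that role on the state bucket with gsutil; the service account and bucket names here are hypothetical placeholders:

# Grant Storage Object Admin on the bucket that holds the Terraform state.
gsutil iam ch \
  serviceAccount:terraform@my-project.iam.gserviceaccount.com:roles/storage.objectAdmin \
  gs://my-state-bucket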

0

It was an AWS CLI session issue for me. I logged in again using the gimme-aws-creds command from the command prompt and tried again. It worked.

m4n0
  • 29,823
  • 27
  • 76
  • 89
chandrgupt
  • 41
  • 2
0

I've run into the same issue in AWS and our pipeline. We are transitioning to GitHub Actions. Our Terraform uses DynamoDB for lock-state persistence and S3 to hold the actual Terraform state file. When I looked at the lock state in DynamoDB, the MD5 digest column was empty and the key did not end with -md5, it was just a plain key.

Note: Do not try this if you are not familiar with the Terraform state file.

What I did was clone the said lock-state item and rename it to end with -md5. Then I looked at my S3 state file for the hash key and copied it over to the digest column in the DynamoDB table. Finally, I renamed the old lock-state item to a different key so that it won't be looked up.

That's it for me.

Again, this may not work for everybody but this worked for me.
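
For anyone wanting to inspect those records before touching them, a sketch using the AWS CLI; the table and key names are placeholders, and the -md5 suffix is the convention the S3 backend uses for its digest item:

# Look at the digest item Terraform compares the state file against.
aws dynamodb get-item --table-name mytable \
  --key '{"LockID": {"S": "mybucket/path/to/my/key-md5"}}'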

Rodel
  • 147
  • 1
  • 6
0

Using -lock=false with any cloud provider should be done with caution, as it removes the safeguard put in place to prevent conflicts, especially if you're working in a team.

The usual cause of state lock errors is one or more Terraform processes that are still running, for example a terraform console session that is active while a plan or apply is being executed at the same time.

Try listing the currently active Terraform processes and killing them.

0

In my situation, this error was caused by running out of memory in the middle of a Terraform command, which caused the command to be killed mid-run.

The reason I was running out of memory was that the Terraform command was running inside a Docker container on my Mac. To fix this, I needed to increase the Docker VM's memory limit, in addition to running force-unlock to free up the locked state.

Dovid Gefen
  • 323
  • 3
  • 17
0

If you have messed up the lock ID and the DynamoDB records, you can put an empty JSON object in the Info field and then run the command with an empty string to unlock the state:

terraform force-unlock ""
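
A sketch of emptying the Info field from the CLI instead of the DynamoDB console; the table name and LockID are placeholders, and #i/:i are just expression aliases:

# Overwrite the Info attribute of the lock item with an empty JSON object.
aws dynamodb update-item --table-name mytable \
  --key '{"LockID": {"S": "mybucket/path/to/my/key"}}' \
  --update-expression 'SET #i = :i' \
  --expression-attribute-names '{"#i": "Info"}' \
  --expression-attribute-values '{":i": {"S": "{}"}}'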