31

I can't SSH into my EC2 instances; I am getting a timeout error. I checked the security groups to ensure that SSH traffic is allowed. I checked the routing tables and ensured that they are connected to an internet gateway. I was able to SSH into the instances just an hour ago, but no longer. I am connecting via PuTTY. I had the same timeout issue before when connecting with ec2-user@domain, which I solved by simply entering the IP address into the hostname field in PuTTY. At that point I was able to connect without a problem. I then created another EC2 instance, and now I cannot connect to either instance. I have the .ppk file correctly referenced in my PuTTY config. I also tried connecting from a Mac after copying the .pem file there. Is there anything else I can check? Also, why could I not type ec2-user@domain into the connection field in PuTTY as the directions indicate? Is there something wrong with my AWS environment?

Barodapride
  • 3,475
  • 4
  • 25
  • 36
  • Are you whitelisting access to port 22 by IP address in your security group? If so, check whether your local external IP address changed. – Mathew Tinsley Mar 18 '19 at 01:05
  • A connection timeout during SSH is not an instance issue. It is generally related to the security group or the domain name of your instance. FYI, the domain name changes if you restart your instance and don't have an Elastic IP. I would suggest checking the security group associated with your instance again and configuring SSH by selecting 'My IP' from the drop-down menu. Refer to this [link](https://alestic.com/2014/01/ec2-ssh-username/) for the default username for EC2 instances – bot Mar 18 '19 at 01:32
  • 1
    I am allowing SSH connections from all IP addresses in the security group. I submitted a ticket with AWS because I saw other people had issues with reactivated accounts, which mine is. – Barodapride Mar 18 '19 at 01:38
  • see this answer it works: https://stackoverflow.com/a/57961330/3904109 – DragonFire Oct 13 '22 at 06:15

7 Answers

42

The best way to diagnose an SSH problem is to launch a new instance in the same subnet, using the same security group. If this works, then the problem is related to the original instance.

The fact that you are receiving a timeout error indicates that your SSH client has been unable to reach the instance. The instance is not rejecting the connection (eg due to a keypair mismatch); it simply cannot be reached.
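A quick way to confirm this distinction is a raw TCP probe (a sketch assuming the `nc` utility is installed; the IP address below is a placeholder for your instance's public IP):

```shell
# "timed out" means there is no network path to the instance (routing,
# security group, or NACL problem); "Connection refused" means the host
# was reached but nothing is listening on port 22.
nc -vz -w 5 203.0.113.10 22
```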

Things to check:

  • Confirm that the Public IP address is still current (it can change if the instance is stopped and started)
  • Confirm that the instance is in a public subnet, which means a subnet that has a Route Table pointing to an Internet Gateway
  • Confirm that the security group is permitting inbound SSH traffic (port 22) from your IP address (or even 0.0.0.0/0 for testing purposes)
  • Keep NACLs at default settings unless you understand them deeply
  • Make sure the instance is a Linux instance (Windows instances do not have SSH enabled by default)
  • Try it from a different network (eg home, office, tethered via your phone) because some corporate networks block SSH connections
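The first three checks can be run from the AWS CLI rather than the console (a sketch assuming the CLI is configured; the instance ID is a placeholder):

```shell
# Placeholder instance ID -- substitute your own.
INSTANCE_ID=i-0123456789abcdef0

# Current public IP (changes on stop/start without an Elastic IP)
aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
  --query 'Reservations[0].Instances[0].PublicIpAddress' --output text

# Security groups attached to the instance
aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
  --query 'Reservations[0].Instances[0].SecurityGroups' --output table

# Route table for the instance's subnet -- look for a 0.0.0.0/0 route
# whose target is an igw-* (Internet Gateway)
SUBNET_ID=$(aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
  --query 'Reservations[0].Instances[0].SubnetId' --output text)
aws ec2 describe-route-tables \
  --filters "Name=association.subnet-id,Values=$SUBNET_ID" \
  --query 'RouteTables[].Routes' --output table
```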

As another test, you might want to temporarily create another VPC. Use the VPC Wizard to create a VPC with just a single, public subnet. Launch an instance and confirm that you are able to SSH into the instance.

MD. Khairul Basar
  • 4,976
  • 14
  • 41
  • 59
John Rotenstein
  • 241,921
  • 22
  • 380
  • 470
  • 1
    I created a new VPC myself. There was no Route Table pointing to an Internet Gateway. After creating an Internet Gateway and adding it to the Route Table, SSH to EC2 works well. Thanks! – Jay Lim Mar 14 '21 at 08:21
  • Kudos, no route to an internet gateway on my public subnet, lifesaver! – Ebikeneser Nov 17 '22 at 14:23
5

This issue was an account issue. I had reactivated my old account but somehow it was still flagged as 'isolated' within AWS. I had access to the AWS console, but I couldn't SSH into anything. As a user, there is no way to see this yourself. I had to post on the AWS developer forums where an AWS developer was able to see that my account was 'isolated' and submitted a ticket on my behalf. I am now able to SSH into my EC2 instance with no problem.

Barodapride
  • 3,475
  • 4
  • 25
  • 36
  • This problem was driving me crazy, thanks for posting your solution here. I can confirm that I had the same exact issue and Amazon was able to fix it when I submitted a case. – alanxoc3 Dec 31 '19 at 15:31
3

I had to manually create a new Internet Gateway and then add Routing from 0.0.0.0/0 to it into the Routing Table of my VPC Subnet, as explained here.
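The same fix can be applied from the AWS CLI (a sketch; the VPC and route table IDs are placeholders for your own):

```shell
# Create an Internet Gateway and capture its ID
IGW_ID=$(aws ec2 create-internet-gateway \
  --query 'InternetGateway.InternetGatewayId' --output text)

# Attach it to the VPC (placeholder VPC ID)
aws ec2 attach-internet-gateway --vpc-id vpc-0123456789abcdef0 \
  --internet-gateway-id "$IGW_ID"

# Route all outbound traffic (0.0.0.0/0) through it (placeholder route table ID)
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"
```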

yegor256
  • 102,010
  • 123
  • 446
  • 597
  • This is exactly what I experienced. The destination I set was `0.0.0.0/16` which caused my issue. After I changed to `0.0.0.0/0`, the ssh connection can be established as well as the instance connect. – fsevenm Apr 08 '23 at 14:46
2

If you've implemented the other solutions on this thread and they still don't solve your timeout problem, here's something that worked for me:

Simply edit your public Route Table (which should be associated with the subnet where your EC2 instance is). Add an Outbound Rule to allow all TCP traffic on ports 1024-65535.

I learned about this in an ACloudGuru AWS course (certified Solutions Architect, Associate level). The basic idea is that when you initially connect to port 22, your session will be moved to an "ephemeral port" (between 1024 and 65535 on the instance itself) which is only used for the duration of your session. When your session is over, the port becomes free again. This allows new incoming connections to the instance's port 22 to be translated into sessions. Essentially, the purpose is to allow an instance to serve multiple incoming SSH connections concurrently.

2

Spencer's answer solved it for me. That was indeed the case, with one small correction: you need to edit the Outbound Rule on the Network ACL.

What I did from scratch:

  • Create VPC
  • Create a subnet in that VPC
  • Launch an EC2 instance
  • Follow this link and add everything it specifies
  • Change the Network ACL outbound rule to contain the port range specified by Spencer: 1024-65535
  • Done, you can connect now

Note that you won't be able to ping the instance if ICMP traffic is not allowed.
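The NACL change in the step above can be sketched with the AWS CLI (the NACL ID is a placeholder for the ACL associated with your instance's subnet, and rule number 100 assumes that slot is free):

```shell
# Allow outbound TCP on the ephemeral port range 1024-65535,
# so SSH return traffic can leave the subnet.
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --rule-number 100 \
  --protocol tcp \
  --port-range From=1024,To=65535 \
  --cidr-block 0.0.0.0/0 \
  --egress \
  --rule-action allow
```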

0

Another possible problem/solution pair here :) I ran into a similar problem: connecting to a freshly created AWS EC2 instance failed using an ssh-ed25519 key from Ubuntu 20.04. There were no guiding error messages in /var/log/auth.log (on the server) or in the ssh -v -i /path/to/key.pem ubuntu@ec2host output. I was already pulling my hair out. I tried stopping and restarting the instance; nothing.

Then I just used Amazon's web SSH to add a new key pair to /home/ubuntu/.ssh/authorized_keys and ran sudo systemctl restart ssh, and the new ssh-ed25519 key started working. And the old one started working too (I did not delete it). I do not know whether it was something to do with whitespace in the authorized_keys file or the ssh service not loading the configuration correctly.
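The recovery described above amounts to the following, run inside the instance's browser-based console session (a sketch; the public-key filename is a placeholder, and the service name `ssh` assumes Ubuntu — on Amazon Linux it is `sshd`):

```shell
# Append the new public key and make sure permissions are strict
cat ~/new-key.pub >> /home/ubuntu/.ssh/authorized_keys
chmod 600 /home/ubuntu/.ssh/authorized_keys

# Restart the SSH daemon so it re-reads its configuration
sudo systemctl restart ssh
```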

There is a similar thread on EC2 public key formatting that might be related.

So if you do not see any errors but are unable to connect to an EC2 instance using SSH, you might try repeating this process.

MF.OX
  • 2,366
  • 1
  • 24
  • 28
0

I just had this problem coming back to AWS after a long hiatus.

I created a new, default VPC, and the wizard-like "network guidance" pop-up in the right-most window pane assured me my dreams of SSH would be fulfilled by the unmodified defaults.

I had to make one change to get my SSH connection working: add my IP (or 0.0.0.0/0) to the inbound rules for the security group. If you look, the only existing rule already allows all ports, but only for traffic coming from the same security group. So now I have two rules in my default security group, which I'm always going to want:
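Adding that inbound rule for your current address can be sketched with the AWS CLI (the security group ID is a placeholder; checkip.amazonaws.com returns your public IP):

```shell
# Look up your current public IP
MYIP=$(curl -s https://checkip.amazonaws.com)

# Allow inbound SSH (TCP 22) from that address only
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 \
  --cidr "$MYIP/32"
```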

Inbound rules on my default VPC's default security group

John
  • 6,433
  • 7
  • 47
  • 82