
I have deployed a simple Hyperledger Fabric network to AWS, using separate VM instances for an orderer, two peers, and their respective CAs. So I have one VM instance for the orderer, one for the orderer CA, and so on.

I am able to start the network, create a channel, and deploy the sample fabcar chaincode. I can also query and invoke the chaincode from either of the two peers just fine using `peer chaincode query` and `peer chaincode invoke`.
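Concretely, from inside one of the peer containers I run commands along these lines (here `$ORDERER_CA` stands for the path to the orderer's TLS CA certificate, and the invoke arguments are just the ones from the fabcar sample):

peer chaincode query -C channel1 -n fabcar -c '{"Args":["queryAllCars"]}'
peer chaincode invoke -o 172.31.42.206:7050 --tls --cafile $ORDERER_CA -C channel1 -n fabcar -c '{"function":"createCar","Args":["CAR12","Honda","Accord","black","Tom"]}'

Both of these work as expected.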

The problem I’m experiencing happens when I use a Node.js application (running outside the network and using the `fabric-network` module) to query or invoke the chaincode. I keep getting the following error:

error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Committer- name: 172.31.42.206:7050, url:grpcs://172.31.42.206:7050, connected:false, connectAttempted:true
error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server 172.31.42.206:7050 url:grpcs://172.31.42.206:7050 timeout:3000
error: [DiscoveryService]: _buildOrderer[channel1] - Unable to connect to the discovered orderer 172.31.42.206:7050 due to Error: Failed to connect before the deadline on Committer- name: 172.31.42.206:7050, url:grpcs://172.31.42.206:7050, connected:false, connectAttempted:true

When I query the chaincode, I get the above error but still receive the query results. However, when I try to invoke, I get the error and the invocation does not go through (which would fit a problem reaching only the orderer, since a query just needs a peer endorsement while an invoke has to be submitted to the orderer).

My connection profile is as follows:

{
    "name": "test-network-org1",
    "version": "1.0.0",
    "client": {
        "organization": "Org1",
        "connection": {
            "timeout": {
                "peer": {
                    "endorser": "300"
                }
            }
        }
    },
    "organizations": {
        "Org1": {
            "mspid": "Org1MSP",
            "peers": [
                "peer0.org1.example.com"
            ],
            "certificateAuthorities": [
                "ca.org1.example.com"
            ]
        }
    },
    "peers": {
        "peer0.org1.example.com": {
            "url": "grpcs://peer0.org1.example.com:7051",
            "tlsCACerts": {
                "pem": "-----BEGIN CERTIFICATE-----...-----END CERTIFICATE-----\n"
            },
            "grpcOptions": {
                "ssl-target-name-override": "peer0.org1.example.com",
                "hostnameOverride": "peer0.org1.example.com"
            }
        }
    },
    "certificateAuthorities": {
        "ca.org1.example.com": {
            "url": "https://ca.org1.example.com:7054",
            "caName": "ca-org1",
            "tlsCACerts": {
                "pem": ["-----BEGIN CERTIFICATE-----...-----END CERTIFICATE-----\n"]
            },
            "httpOptions": {
                "verify": false
            }
        }
    }
}

In the Node.js application, I have `discovery.enabled` set to `true` and `discovery.asLocalhost` set to `false`. I have also added the IP addresses of each VM to the `/etc/hosts` of the machine running the application.
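Roughly, this is what the client code looks like (the wallet path and the `appUser` identity label are placeholders; the channel and chaincode names match the ones above):

const { Gateway, Wallets } = require('fabric-network');
const fs = require('fs');
const path = require('path');

async function main() {
    // Load the connection profile shown above.
    const ccp = JSON.parse(fs.readFileSync(path.resolve(__dirname, 'connection-org1.json'), 'utf8'));

    // File system wallet holding an enrolled Org1 identity.
    const wallet = await Wallets.newFileSystemWallet(path.join(__dirname, 'wallet'));

    const gateway = new Gateway();
    await gateway.connect(ccp, {
        wallet,
        identity: 'appUser',
        discovery: { enabled: true, asLocalhost: false }
    });

    try {
        const network = await gateway.getNetwork('channel1');
        const contract = network.getContract('fabcar');

        // The query returns results despite the orderer error...
        const result = await contract.evaluateTransaction('queryAllCars');
        console.log(result.toString());

        // ...but the invoke fails with the error above.
        await contract.submitTransaction('createCar', 'CAR12', 'Honda', 'Accord', 'black', 'Tom');
    } finally {
        gateway.disconnect();
    }
}

main().catch(console.error);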

Is there something else that needs to be done or some setting that needs to be changed so that the application can connect to the discovered orderer? Any help is appreciated.

Hymkarn
  • This seems to be a gRPC error establishing the connection. I'm not familiar with deployment in AWS, but a couple of things might be worth checking: that you really can establish a connection from the client machine to the IP address and port (172.31.42.206:7050), to rule out network connectivity issues, and that the nodes' certificates contain the host name/address the client uses to reach them. – bestbeforetoday Sep 14 '21 at 08:33
  • Thank you for the reply! It turned out to be a problem with Docker containers not inheriting the `/etc/hosts` file of the VMs. The reason I was using the IP addresses of the VMs in the first place was that the components (each running in its own Docker container) couldn't talk to each other using hostnames, even with the proper hostname-to-IP mapping in each VM's `/etc/hosts`. Apparently, I had to add the mappings to each component's docker-compose service definition (using `extra_hosts`) if I wanted the Docker containers to use those mappings; see the snippet after these comments. After that, I used the hostnames instead of the IPs and it worked! – Hymkarn Sep 15 '21 at 13:21
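For anyone who runs into the same thing, the fix from the comment above looks roughly like this in each component's docker-compose file (the service name and the `orderer.example.com` hostname are illustrative; the IP is the orderer address from the error above):

services:
  peer0.org1.example.com:
    # ... existing service definition ...
    extra_hosts:
      # hostname-to-IP mappings this container needs
      - "orderer.example.com:172.31.42.206"

`extra_hosts` writes these entries into the container's own `/etc/hosts`, which is why mappings added only on the host VM were not visible inside the containers.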

0 Answers