
I'm trying to invoke a function in MLRun but I'm getting the error below. Can anyone please help me with that? I'm attaching my code here:

from cloudpickle import load
import numpy as np
from typing import List
import mlrun

class ClassifierModel(mlrun.serving.V2ModelServer):
    def load(self):
        """load and initialize the model and/or other elements"""
        model_file, extra_data = self.get_model('.pkl')
        self.model = load(open(model_file, 'rb'))

    def predict(self, body: dict) -> List:
        """Generate model predictions from sample."""
        feats = np.asarray(body['inputs'])
        result: np.ndarray = self.model.predict(feats)
        return result.tolist()

#The following code converts the ClassifierModel class that you defined in the previous step to a serving function. The name of the class to be used by the serving function is set in spec.default_class.

serving_fn = mlrun.code_to_function('serving', kind='serving',image='mlrun/mlrun')
serving_fn.spec.default_class = 'ClassifierModel'

# `project` is the MLRun project object created earlier in the notebook
model_file = project.get_artifact_uri('my_model')
serving_fn.add_model('my_model',model_path=model_file)

#Testing Your Function Locally

my_data = '''{"inputs":[[5.1, 3.5, 1.4, 0.2],[7.7, 3.8, 6.7, 2.2]]}'''

server = serving_fn.to_mock_server()
server.test("/v2/models/my_model/infer", body=my_data)


# Building and Deploying the Serving Function¶

function_address = serving_fn.deploy()

print(f'The address for the function is {function_address}\n')

!curl $function_address

# Now we will try to invoke our serving function

serving_fn.invoke('/v2/models/my_model/infer', my_data)

OSError: error: cannot get build status, HTTPConnectionPool(host='localhost', port=8080): Max retries exceeded with url: /api/v1/build/status?name=serving&project=getting-started-jovyan&tag=&logs=yes&offset=0&last_log_timestamp=1664873747518.8518&verbose=no (Caused by ReadTimeoutError("HTTPConnectionPool(host='localhost', port=8080): Read timed out. (read timeout=45)"))

2 Answers
By the looks of it, there's nothing listening on localhost:8080, even though there should be.

According to the getting started guide there should be an "MLRun Backend Service", presumably on that address by default. I suspect you haven't started the service.
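A quick way to verify this is to check whether anything is actually accepting TCP connections on that address. The sketch below (a generic connectivity check, not MLRun-specific) assumes the backend service should be reachable at `localhost:8080`:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# If this returns False, the MLRun backend service is not running
# (or is not listening on this address), which matches the error above.
port_open("localhost", 8080)
```

If the check fails, start the MLRun backend service per the getting started guide before retrying the deploy.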


The address localhost:8080 is not accessible from docker-compose, which means you have to install MLRun under a different IP address. I see two steps to solve the issue:

1. Relevant installation

The MLRun Community Edition in Docker Desktop has to be installed under the relevant HOST_IP (not localhost or 127.0.0.1, but a stable IP address; see ipconfig) and with the relevant SHARED_DIR. See the relevant command line (for Windows):

set HOST_IP=192.168.0.150
set SHARED_DIR=c:\Apps\mlrun-data
set TAG=1.2.0

mkdir %SHARED_DIR%

docker-compose -f "c:\Apps\mlrun\compose.with-jupyter.yaml" up

BTW: for the YAML file, see https://docs.mlrun.org/en/latest/install/local-docker.html

2. Access to the port

When you call serving_fn.invoke, you have to open the relevant port (reported by deploy_function) on your IP address (based on the HOST_IP setting, see the first point).

Typically this port can be blocked by your firewall policy or your local antivirus, which means you have to open access to this port before the invoke call.
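Once MLRun is installed under HOST_IP, the client also has to talk to that address instead of localhost. One way to do this is via the MLRUN_DBPATH environment variable, which the MLRun SDK reads to locate the API service. A minimal sketch (the IP address below is the example HOST_IP from the installation step, not a fixed value):

```python
import os

# Point the MLRun client at the API service running under HOST_IP
# instead of the default localhost:8080. Replace the address with
# the HOST_IP from your own `ipconfig` output.
os.environ["MLRUN_DBPATH"] = "http://192.168.0.150:8080"
```

Set this before importing/using mlrun in the notebook, so subsequent calls such as serving_fn.invoke target the right host.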

