
Predicting online with a deployed model

Import modules

First, import the modules required for the next steps. The examples below also assume that an authenticated client object, client, has already been created in a previous step.

import requests

from aiaengine.api import app

Get endpoint URL of the deployed model

To use a deployed model for prediction, you need the endpoint URL of the deployment (see the example below), which becomes available once the model deployment is active.

# Look up the deployment of the trained model within its app
get_deployment_response = client.apps.GetDeployment(
    app.GetDeploymentRequest(
        id='id_of_app_where_model_is_included',
        deployment_id='id_of_model_deployment'
    )
)

# The deployment must be active before its endpoint can be used
assert get_deployment_response.status == 'active'

# Retrieve the endpoint associated with the deployment
get_endpoint_response = client.apps.GetEndpoint(
    app.GetEndpointRequest(
        id=get_deployment_response.app.id,
        endpoint_id=get_deployment_response.endpoint.id
    )
)

# The URL used to send prediction requests
endpoint_url = get_endpoint_response.url

endpoint_url
>>> 'https://ep-5b21ed48-ab1f-455a-93b1-b16c77243b6c.aia-engine.pi.exchange'
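
If the assertion above fails because the deployment is still being set up, you can poll its status until it becomes active before fetching the endpoint. The sketch below reuses the GetDeployment call shown above; the 10-second interval and the overall timeout are arbitrary choices, not part of the API.

import time

# Poll the deployment status until it becomes active (sketch; interval and timeout are arbitrary)
for _ in range(60):
    get_deployment_response = client.apps.GetDeployment(
        app.GetDeploymentRequest(
            id='id_of_app_where_model_is_included',
            deployment_id='id_of_model_deployment'
        )
    )
    if get_deployment_response.status == 'active':
        break
    time.sleep(10)
else:
    raise TimeoutError('Deployment did not become active within 10 minutes')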

Make predictions and save the results to a file

Now you can make predictions on new data by sending requests to endpoint_url, as shown in the example below.

# Read the new data to predict on, in CSV format (including the header row)
with open('./path/to/new_data.csv') as file:
    data_for_prediction = file.read()

# Send the CSV payload to the endpoint's invocations route
response = requests.post(
    endpoint_url + '/invocations',
    data=data_for_prediction.encode(),
    headers={'Content-Type': 'text/csv'}
)

# A 200 status code means the prediction request succeeded
assert response.status_code == 200

# The response body contains the predictions; save it to a JSON Lines file
with open('./path/to/new_data_prediction.jsonl', 'w') as file:
    file.write(response.content.decode())
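
The saved file is in JSON Lines format, one prediction per line. Below is a minimal sketch of loading the predictions back into Python; it assumes each line of the response is a standalone JSON object, and the exact fields depend on the deployed model.

import json

# Load the saved predictions, assuming one JSON object per line (JSON Lines)
with open('./path/to/new_data_prediction.jsonl') as file:
    predictions = [json.loads(line) for line in file if line.strip()]

# Inspect the first prediction
print(predictions[0])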