# Example

Create APIs that process your workloads asynchronously.

## Implementation

Create a folder for your API. In this case, we are deploying an iris-classifier AsyncAPI. This folder will have the following structure:

```
./iris-classifier
├── cortex.yaml
├── handler.py
└── requirements.txt
```

We will now create the necessary files:

```bash
mkdir iris-classifier && cd iris-classifier
touch handler.py requirements.txt cortex.yaml
```

```python
# handler.py

import pickle
from typing import Dict, Any

import boto3
from botocore import UNSIGNED
from botocore.client import Config

labels = ["setosa", "versicolor", "virginica"]


class Handler:
    def __init__(self, config):
        # download the model from S3; the unsigned Config assumes a public
        # bucket (remove it if your bucket requires credentials)
        s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
        s3.download_file(config["bucket"], config["key"], "/tmp/model.pkl")
        with open("/tmp/model.pkl", "rb") as f:
            self.model = pickle.load(f)

    def handle_async(self, payload: Dict[str, Any]) -> Dict[str, str]:
        measurements = [
            payload["sepal_length"],
            payload["sepal_width"],
            payload["petal_length"],
            payload["petal_width"],
        ]

        label_id = self.model.predict([measurements])[0]

        # result must be json serializable
        return {"label": labels[label_id]}
```
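Before deploying, the `handle_async` logic can be smoke-tested locally. The sketch below is a hypothetical standalone check, not part of the deployment: it inlines the handler's payload-to-prediction logic and stubs out the model so no S3 access or scikit-learn install is required.

```python
# local_check.py -- hypothetical local smoke test of the handler logic

labels = ["setosa", "versicolor", "virginica"]


class StubModel:
    """Stand-in for the pickled classifier; always predicts the first label."""

    def predict(self, rows):
        return [0]


def handle_async(model, payload):
    # mirrors Handler.handle_async from handler.py
    measurements = [
        payload["sepal_length"],
        payload["sepal_width"],
        payload["petal_length"],
        payload["petal_width"],
    ]
    label_id = model.predict([measurements])[0]
    return {"label": labels[label_id]}


result = handle_async(StubModel(), {
    "sepal_length": 5.2,
    "sepal_width": 3.6,
    "petal_length": 1.5,
    "petal_width": 0.3,
})
print(result)  # {'label': 'setosa'}
```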

```text
# requirements.txt

boto3
scikit-learn
```

```yaml
# cortex.yaml

- name: iris-classifier
  kind: AsyncAPI
  handler:
    type: python
    path: handler.py
    config:
      bucket: <bucket_name>  # S3 bucket containing the pickled model
      key: <model_key>       # e.g. <path/to/model.pkl>
```

## Deploy

We can now deploy our API with the `cortex deploy` command. This command can be re-run to update your API configuration or handler implementation.

```bash
cortex deploy cortex.yaml

# creating iris-classifier (AsyncAPI)
#
# cortex get                  (show api statuses)
# cortex get iris-classifier  (show api info)
```

## Monitor

To check whether the deployed API is ready, we can run the `cortex get` command with the `--watch` flag.

```bash
cortex get iris-classifier --watch

# status     up-to-date   requested   last update
# live       1            1           10s
#
# endpoint: http://<load_balancer_url>/iris-classifier
#
# api id                                                         last deployed
# 6992e7e8f84469c5-d5w1gbvrm5-25a7c15c950439c0bb32eebb7dc84125   10s
```

## Submit a workload

Now we can submit a workload to our deployed API. We will start by creating a `sample.json` file containing a request payload in the format expected by our `iris-classifier` handler implementation.

```bash
cat > sample.json <<'EOF'
{
    "sepal_length": 5.2,
    "sepal_width": 3.6,
    "petal_length": 1.5,
    "petal_width": 0.3
}
EOF
```

Once we have our sample request payload, we submit it with a `POST` request to the endpoint URL displayed earlier by the `cortex get` command. The API immediately responds with a request `id`.

```bash
curl -X POST http://<load_balancer_url>/iris-classifier -H "Content-Type: application/json" -d '@./sample.json'

# {"id": "659938d2-2ef6-41f4-8983-4e0b7562a986"}
```
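The same request can also be made from Python using only the standard library. This is a sketch, not part of Cortex; the endpoint URL is a placeholder you would replace with the one shown by `cortex get`.

```python
# submit.py -- hypothetical Python equivalent of the curl request above

import json
import urllib.request


def submit(endpoint: str, payload: dict) -> str:
    """POST a JSON payload to the AsyncAPI endpoint and return the request id."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]


# request_id = submit("http://<load_balancer_url>/iris-classifier", {
#     "sepal_length": 5.2, "sepal_width": 3.6,
#     "petal_length": 1.5, "petal_width": 0.3,
# })
```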

## Retrieve the result

The returned request id allows us to check the status of the workload and retrieve its result. To do so, we submit a `GET` request to the same endpoint URL with `/<id>` appended.

```bash
curl http://<load_balancer_url>/iris-classifier/<id>  # <id> is the request id that was returned in the previous POST request

# {
#   "id": "659938d2-2ef6-41f4-8983-4e0b7562a986",
#   "status": "completed",
#   "result": {"label": "setosa"},
#   "timestamp": "2021-03-16T15:50:50+00:00"
# }
```

Depending on the status of your workload, you will get different responses back. The possible workload statuses are `in_queue | in_progress | failed | completed`. The `result` and `timestamp` keys are only returned when the status is `completed`.
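Checking the status by hand gets tedious, so a small helper can poll until the workload reaches a terminal status. This is a sketch, not part of Cortex; the `fetch` parameter is an assumption introduced here so the loop can be exercised without a live endpoint.

```python
# poll.py -- hypothetical helper that waits for an AsyncAPI result

import json
import time
import urllib.request
from typing import Callable


def _http_get_json(url: str) -> dict:
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


def wait_for_result(
    endpoint: str,
    request_id: str,
    fetch: Callable[[str], dict] = _http_get_json,
    interval: float = 1.0,
    timeout: float = 120.0,
) -> dict:
    """GET <endpoint>/<request_id> until the status is completed or failed."""
    deadline = time.monotonic() + timeout
    while True:
        body = fetch(f"{endpoint}/{request_id}")
        if body["status"] in ("completed", "failed"):
            return body
        if time.monotonic() >= deadline:
            raise TimeoutError(
                f"workload {request_id} still {body['status']} after {timeout}s"
            )
        time.sleep(interval)
```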

It is also possible to set up a webhook in your handler to have the response sent to a pre-defined web server once the workload completes or fails. You can read more about this in the [webhook documentation](/0.34/workloads/async-apis/webhooks.md).

## Stream logs

If necessary, you can stream the logs of a randomly selected running pod of your API with the `cortex logs` command. This is intended for debugging purposes only; for production, view your logs in CloudWatch Logs.

```bash
cortex logs iris-classifier
```

## Delete the API

Finally, you can delete your API with a simple `cortex delete` command.

```bash
cortex delete iris-classifier
```

