# Example
Create APIs that can respond to prediction requests in real-time.
## Implement

```bash
mkdir text-generator && cd text-generator
touch predictor.py requirements.txt text_generator.yaml
```

```python
# predictor.py

from transformers import pipeline


class PythonPredictor:
    def __init__(self, config):
        # Load the default text-generation model via the transformers pipeline
        self.model = pipeline(task="text-generation")

    def predict(self, payload):
        # Generate text from the "text" field of the request payload
        return self.model(payload["text"])[0]
```

```text
# requirements.txt

transformers
torch
```

```yaml
# text_generator.yaml

- name: text-generator
  kind: RealtimeAPI
  predictor:
    type: python
    path: predictor.py
  compute:
    gpu: 1
```

## Deploy
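With the files in place, the API is deployed by pointing the Cortex CLI at the configuration file. A minimal sketch, assuming the CLI is installed and connected to a running cluster (the exact invocation may vary by Cortex version):

```bash
# Deploy the API described in text_generator.yaml
cortex deploy text_generator.yaml
```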
## Monitor
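While the API's replicas start up and download the model, the CLI can report its status. A sketch assuming `cortex get` and its `--watch` flag behave as in recent Cortex releases:

```bash
# Show the status of the text-generator API and keep refreshing it
cortex get text-generator --watch
```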
## Stream logs
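Logs from the API can be streamed through the CLI, which is useful while the model is loading or when requests fail. A sketch assuming the `cortex logs` command:

```bash
# Stream logs from the text-generator API
cortex logs text-generator
```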
## Make a request
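Once the API is live, it accepts a JSON body with a `text` field, which is what `predict` reads from `payload["text"]`. The endpoint below is a placeholder; substitute the URL reported for your cluster's text-generator API:

```bash
# <api-endpoint> is a placeholder for the URL reported for the API
curl http://<api-endpoint>/text-generator \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"text": "machine learning is"}'
```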
## Delete
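When the API is no longer needed, deleting it by name frees the resources it was using. A sketch assuming the `cortex delete` command:

```bash
# Remove the text-generator API from the cluster
cortex delete text-generator
```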