Implementation Examples

This section provides practical examples of how to call the Cinclus LLM On-Demand API and integrate it into applications. We will demonstrate simple request/response patterns and show how to build a basic chatbot using the API.

Example 1: Generating Text (Python)

Below is a basic example using Python and the requests library to call the Text Completions API. In this example, we send a prompt and receive a completion.

import requests

API_KEY = "YOUR_API_KEY"
url = "https://cinclus.api.wayscloud.services/v1/completions"
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json"
}
data = {
    "model": "cinclus-text-001",
    "prompt": "Once upon a time, ",
    "max_tokens": 50,
    "temperature": 0.7
}

response = requests.post(url, headers=headers, json=data)
if response.status_code == 200:
    result = response.json()
    text = result["choices"][0]["text"]
    print("Generated text:", text)
else:
    print("Request failed:", response.status_code, response.text)

Explanation: We set the authorization header and JSON payload, make a POST request, and on success extract and print the generated text from the first choice in the response.
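
If you call the endpoint repeatedly, it helps to wrap the request in a small helper. The sketch below is one way to do it, reusing the same endpoint and payload fields as above; the timeout value and the use of raise_for_status are illustrative choices, not API requirements:

import requests

API_KEY = "YOUR_API_KEY"
COMPLETIONS_URL = "https://cinclus.api.wayscloud.services/v1/completions"

def complete(prompt, model="cinclus-text-001", max_tokens=50, temperature=0.7):
    """Send one completion request and return the generated text."""
    response = requests.post(
        COMPLETIONS_URL,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            "model": model,
            "prompt": prompt,
            "max_tokens": max_tokens,
            "temperature": temperature,
        },
        timeout=30,  # fail fast instead of hanging indefinitely (illustrative value)
    )
    response.raise_for_status()  # raise on 4xx/5xx instead of parsing an error body
    return response.json()["choices"][0]["text"]

print(complete("Once upon a time, "))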

Example 2: Basic Chatbot Loop (Python)

Using the Chat Completions API, you can create an interactive chatbot.

import requests

API_KEY = "YOUR_API_KEY"
url = "[https://cinclus.api.wayscloud.services/v1/chat/completions](https://cinclus.api.wayscloud.services/v1/chat/completions)"
headers = {
"Authorization": f"Bearer {API_KEY}",
"Content-Type": "application/json"
}

messages = [{"role": "system", "content": "You are a helpful assistant."}]
print("Chatbot: Hello! I'm your assistant. Ask me anything (type 'exit' to quit).")

while True:
    user_input = input("You: ")
    if user_input.strip().lower() in ["exit", "quit", "q"]:
        print("Chatbot: Goodbye!")
        break
    messages.append({"role": "user", "content": user_input})
    payload = {"model": "cinclus-chat-001", "messages": messages, "max_tokens": 100, "temperature": 0.7}
    response = requests.post(url, headers=headers, json=payload)
    if response.status_code != 200:
        print(f"Error: {response.status_code}, {response.text}")
        continue
    result = response.json()
    assistant_message = result["choices"][0]["message"]["content"]
    print(f"Chatbot: {assistant_message}")
    messages.append({"role": "assistant", "content": assistant_message})

Explanation: We maintain the full conversation history in messages, send it with each request so the model keeps context, and append each assistant reply back to the history after printing it.
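
One detail the loop above glosses over is that the messages list grows without bound, and very long conversations may eventually exceed the model's context window. A simple mitigation, sketched below, is to keep the system message plus only the most recent turns; the cap of 20 messages is an arbitrary illustrative choice, not an API limit:

MAX_HISTORY = 20  # illustrative cap on retained messages, not an API limit

def trim_history(messages, max_messages=MAX_HISTORY):
    """Keep the system message plus the most recent conversation turns."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_messages:]

# Inside the loop, before building the payload:
# messages = trim_history(messages)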

Example 3: Using cURL (Command Line)

For quick tests from the command line, use cURL to request embeddings:

curl -X POST https://cinclus.api.wayscloud.services/v1/embeddings \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "cinclus-embed-001",
    "input": "Sample text to embed"
  }'

Explanation: This sends a single input string to the Embeddings API and returns a response containing the embedding vector for that text.
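
The same request can be made from Python, reusing the pattern from the earlier examples. The sketch below prints the raw JSON response rather than a specific field, because the exact location of the embedding array in the response body should be confirmed against the API reference; the timeout is an illustrative choice:

import requests

API_KEY = "YOUR_API_KEY"

response = requests.post(
    "https://cinclus.api.wayscloud.services/v1/embeddings",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={"model": "cinclus-embed-001", "input": "Sample text to embed"},
    timeout=30,  # illustrative timeout
)
response.raise_for_status()
print(response.json())  # the embedding vector is contained in this JSON body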

Language and Platform Support

The Cinclus API is plain HTTP+JSON, so it can be called from any language or platform with an HTTP client. In production, set request timeouts, retry transient failures, and keep API keys out of source code (for example, load them from environment variables or a secrets manager). With these examples, you're ready to build with the Cinclus LLM On-Demand API. For scaling strategies, see Offloading & Scaling.
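
As a concrete illustration of that advice, the sketch below sets a timeout, retries transient failures (HTTP 429 and 5xx) with exponential backoff, and reads the API key from an environment variable (CINCLUS_API_KEY is a name chosen for this example). The retry count and delays are illustrative defaults, not values mandated by the API:

import os
import time
import requests

API_KEY = os.environ["CINCLUS_API_KEY"]  # keep secrets out of source code
URL = "https://cinclus.api.wayscloud.services/v1/completions"

def post_with_retries(payload, max_attempts=3):
    """POST with a timeout, retrying transient errors with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            response = requests.post(
                URL,
                headers={
                    "Authorization": f"Bearer {API_KEY}",
                    "Content-Type": "application/json",
                },
                json=payload,
                timeout=30,  # illustrative timeout
            )
        except requests.exceptions.RequestException:
            if attempt == max_attempts:
                raise  # give up after the last attempt
            time.sleep(2 ** attempt)  # back off before retrying a network error
            continue
        if response.status_code in (429, 500, 502, 503, 504) and attempt < max_attempts:
            time.sleep(2 ** attempt)  # back off before retrying a transient status
            continue
        response.raise_for_status()  # non-transient errors are raised immediately
        return response.json()

result = post_with_retries({
    "model": "cinclus-text-001",
    "prompt": "Once upon a time, ",
    "max_tokens": 50,
})
print(result["choices"][0]["text"])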