Quick Start Guide
Welcome to Infyr.AI, a decentralized, serverless AI inference platform built on the Solana blockchain. This guide will help you get started quickly.
Overview
Infyr.AI addresses GPU shortages and high inference costs through a decentralized exchange that rewards GPU miners for securely serving LLM requests. Users pay with SOL, and requests are encrypted to preserve privacy and support compliance.
Getting Started
1. Installation
Choose your preferred language SDK to get started with Infyr.AI:
Python
pip install openai
JavaScript/Node.js
npm install openai
2. Authentication
To use Infyr.AI, you need an API key:
- Visit Infyr.AI
- Log in with your email or Phantom wallet
- Go to Dashboard → My Credits → Connect Wallet → Transfer SOL
- Go to API Keys → Generate New API Key → Copy API Key
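Once you have a key, a common pattern is to keep it out of source code and read it from an environment variable. The sketch below assumes the key has been exported as INFYR_API_KEY; the variable name is only an example, not something Infyr.AI requires.

# Minimal sketch: configure the OpenAI-compatible client with a key read
# from the environment. INFYR_API_KEY is an example name, not a required one.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.infyr.ai/v1",
    api_key=os.environ["INFYR_API_KEY"]
)

The examples below use a literal YOUR_API_KEY placeholder for brevity.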
Available Models
Infyr.AI offers a comprehensive range of AI models across different modalities:
- Text Generation: DeepSeek, Llama, Hermes for chat and text generation
- Vision Models: Multi-modal models for image understanding
- Audio Models: Whisper for speech-to-text, PlayAI TTS for text-to-speech
- Video Generation: Veo3 and Pixverse for creating videos from text
- Embedding Models: For semantic search and similarity tasks
For detailed pricing, visit https://infyr.ai/#pricing
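Because the API is OpenAI-compatible, you can usually check which models your key can reach at runtime. The snippet below assumes the standard /v1/models listing endpoint is exposed; if the call is not supported, refer to the model documentation pages instead.

# Hedged example: enumerate available models via the OpenAI-compatible
# models endpoint (assumed to be exposed at /v1/models).
from openai import OpenAI

client = OpenAI(
    base_url="https://api.infyr.ai/v1",
    api_key="YOUR_API_KEY"
)

for model in client.models.list():
    print(model.id)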
Quick Start Examples
Text Generation
Python

from openai import OpenAI

client = OpenAI(
    base_url="https://api.infyr.ai/v1",
    api_key="YOUR_API_KEY"
)

# Chat completion
response = client.chat.completions.create(
    model="deepseek-70b",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
)

print(response.choices[0].message.content)

JavaScript/Node.js

import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.infyr.ai/v1',
});

const response = await openai.chat.completions.create({
  model: 'deepseek-70b',
  messages: [
    { role: 'user', content: 'Write a Python function to sort a list' }
  ],
});

console.log(response.choices[0].message.content);
Vision Analysis
# Analyze an image
response = client.chat.completions.create(
    model="llama-3.2-vision",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What do you see in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/image.jpg"}
                }
            ]
        }
    ]
)

print(response.choices[0].message.content)
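If your image is a local file rather than a public URL, one common pattern with OpenAI-compatible vision endpoints is to inline it as a base64 data URL. Whether Infyr.AI accepts data URLs is an assumption in this sketch; the remote-URL form above is the documented one.

# Sketch: send a local image as a base64 data URL (assumes the endpoint
# accepts data URLs in image_url, as the OpenAI vision format does).
import base64

with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="llama-3.2-vision",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What do you see in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}
                }
            ]
        }
    ]
)
print(response.choices[0].message.content)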
Audio Processing
# Speech-to-text transcription
with open("audio.mp3", "rb") as audio_file:
    response = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file
    )

print("Transcription:", response.text)
# Text-to-speech with HTTP API
curl -X POST "https://api.infyr.ai/v1/audio/generations" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "playai/tts/v3",
    "input": "Hello from Infyr.AI!",
    "voice": "Jennifer (English (US)/American)"
  }'
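The same request can be made from Python with any HTTP client; the sketch below mirrors the curl call using the requests library. The shape of the response body (raw audio bytes or a JSON payload with a link) is not shown here, so check the Audio Models documentation before saving it.

# Sketch: the curl request above, written with the requests library.
import requests

resp = requests.post(
    "https://api.infyr.ai/v1/audio/generations",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "playai/tts/v3",
        "input": "Hello from Infyr.AI!",
        "voice": "Jennifer (English (US)/American)"
    },
)
resp.raise_for_status()
# Inspect the content type to see whether the body is audio or JSON.
print(resp.headers.get("Content-Type"))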
Video Generation
# Generate video from text
response = client.chat.completions.create(
    model="veo3",
    messages=[
        {
            "role": "user",
            "content": "A serene lake at sunset with mountains in the background"
        }
    ],
    max_tokens=1,
    extra_body={
        "duration": "5s",
        "aspect_ratio": "16:9"
    }
)

print("Video request ID:", response.choices[0].message.content)
Embeddings
# Generate text embeddings
response = client.embeddings.create(
    model="text-embedding-3-large",
    input="Your text to embed here"
)

print("Embedding vector:", response.data[0].embedding)
Language SDK Examples
For detailed SDK examples, see the language-specific SDK pages in this documentation.
Model Documentation
For comprehensive model specifications, capabilities, and advanced examples:
- Text Models - Chat, completion, and reasoning models
- Vision Models - Multi-modal image understanding
- Audio Models - Speech-to-text and text-to-speech
- Video Models - Text-to-video generation
- Embedding Models - Semantic search and similarity
Next Steps
- Visit our GitHub repository for more code examples