Sim Studio

Hugging Face

Use Hugging Face Inference API

Hugging Face is a leading AI platform that provides access to thousands of pre-trained machine learning models and powerful inference capabilities. With its extensive model hub and robust API, Hugging Face offers tools for both research and production AI applications. With Hugging Face, you can:

  • Access pre-trained models: Utilize models for text generation, translation, image processing, and more
  • Generate AI completions: Create content using state-of-the-art language models through the Inference API
  • Process natural language: Analyze text with specialized NLP models
  • Deploy at scale: Host and serve models for production applications
  • Customize models: Fine-tune existing models for specific use cases

In Sim Studio, the Hugging Face integration enables your agents to generate completions programmatically via the Hugging Face Inference API. This supports automation scenarios such as content generation, text analysis, code completion, and creative writing. Your agents can send natural-language prompts, access specialized models for different tasks, and integrate AI-generated content into workflows, connecting your automation to one of the most comprehensive ML platforms available.

Usage Instructions

Generate completions using the Hugging Face Inference API, with access to a wide range of open-source models. Leverage cutting-edge AI models for chat completions, content generation, and AI-powered conversations with customizable parameters.

Tools

huggingface_chat

Generate completions using the Hugging Face Inference API

Input

Parameter     Type     Required  Description
apiKey        string   Yes       Hugging Face API token
provider      string   Yes       The provider to use for the API request (e.g. novita, cerebras)
model         string   Yes       Model to use for chat completions (e.g. deepseek/deepseek-v3-0324)
content       string   Yes       The user message content to send to the model
systemPrompt  string   No        System prompt to guide the model's behavior
maxTokens     number   No        Maximum number of tokens to generate
temperature   number   No        Sampling temperature (0-2); higher values make output more random
stream        boolean  No        Whether to stream the response
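The input parameters above map onto an OpenAI-compatible chat-completions request body. The sketch below shows how such a payload could be assembled; the `build_chat_payload` helper is a hypothetical illustration, not the tool's actual implementation, and the router URL in the comment is an assumption about the current Hugging Face endpoint.

```python
# Sketch: assemble a chat-completions payload from the tool's input parameters.
# build_chat_payload is a hypothetical helper for illustration only.

def build_chat_payload(model, content, system_prompt=None,
                       max_tokens=None, temperature=None, stream=False):
    """Build the JSON body for a Hugging Face chat-completions request."""
    messages = []
    if system_prompt:
        # Optional system message guides the model's behavior
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": content})

    payload = {"model": model, "messages": messages, "stream": stream}
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    if temperature is not None:  # 0-2; higher values make output more random
        payload["temperature"] = temperature
    return payload


payload = build_chat_payload(
    model="deepseek/deepseek-v3-0324",
    content="Summarize the plot of Hamlet in one sentence.",
    system_prompt="You are a concise assistant.",
    temperature=0.7,
)
# The tool would POST this body with an "Authorization: Bearer <apiKey>"
# header, e.g. to https://router.huggingface.co/v1/chat/completions
# (endpoint shown here as an assumption; verify against current HF docs).
```

Note that optional parameters are omitted from the body rather than sent as null, which most providers require.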

Output

Parameter          Type
content            string
model              string
usage              string
completion_tokens  string
total_tokens       string
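The output fields above follow the standard chat-completions response shape, where the generated text sits inside the first choice and token counts sit under `usage`. A minimal sketch of extracting them from a decoded JSON response; the `extract_outputs` helper and the sample dict are illustrative assumptions, not the tool's actual code or a real API response.

```python
# Sketch: map a decoded chat-completions response onto the tool's
# output parameters (content, model, usage). Illustrative only.

def extract_outputs(response: dict) -> dict:
    """Pull the tool's output fields out of a chat-completions response."""
    usage = response.get("usage", {})
    return {
        # Generated text lives in the first choice's message
        "content": response["choices"][0]["message"]["content"],
        "model": response.get("model", ""),
        "usage": {
            "completion_tokens": usage.get("completion_tokens"),
            "total_tokens": usage.get("total_tokens"),
        },
    }


# Illustrative sample, not a real API response:
sample = {
    "model": "deepseek/deepseek-v3-0324",
    "choices": [{"message": {"role": "assistant", "content": "Hello!"}}],
    "usage": {"prompt_tokens": 5, "completion_tokens": 2, "total_tokens": 7},
}
out = extract_outputs(sample)
print(out["content"])  # Hello!
```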

Block Configuration

Input

Parameter     Type    Required  Description
systemPrompt  string  No        System prompt to guide the model's behavior

Outputs

Output    Type    Description
response  object  The full response object
content   string  Generated text content of the response
model     string  Model that produced the response
usage     json    Token usage statistics for the response

Notes

  • Category: tools
  • Type: huggingface