A simple canvas for language and audio models.

This is a place for quickly testing out different mixes of language and audio generation models. Everything is currently stored locally in your browser.
Get access to the beta when it's ready
Connect third-party services to start adding blocks to your canvas. Your keys are kept in local storage, and all calls happen directly between your browser and the third-party service.
In the beta, you won't need to enter any keys; we will provide these integrations for you. We will also provide a way to publish your workflows and share them with others.
The following example grabs the five most recent items from an RSS feed, creates a readable summary using mixtral-8x7b-instruct, and pipes the output to an ElevenLabs text-to-speech model.
  • 0
    Use RSS Feed
    Grabs the most recent {numItems} from an RSS feed {source}.
  • 1
    Invoke LLM (OpenRouter)
Calls language model {model} with {messages}. Use the `{previousBlockResult}` template to insert the output of the previous block.
  • 2
    Create Audio (ElevenLabs)
Generates an audio file from {text} using {model}. Use the `{previousBlockResult}` template to insert the output of the previous block.
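The chain above can be sketched as a simple pipeline, where each block receives the previous block's result and the `{previousBlockResult}` placeholder is substituted into its template. All function names and the stubbed block bodies below are illustrative assumptions, not the app's actual API; only the templating behavior comes from the block descriptions.

```python
from typing import Callable, List

def render_template(template: str, previous_block_result: str) -> str:
    # Substitute the prior block's output into the {previousBlockResult} slot.
    return template.replace("{previousBlockResult}", previous_block_result)

def run_blocks(blocks: List[Callable[[str], str]]) -> str:
    # Each block receives the previous block's result and returns its own.
    result = ""
    for block in blocks:
        result = block(result)
    return result

# Stubbed blocks standing in for the real service calls (hypothetical):
def use_rss_feed(_prev: str) -> str:
    # Would fetch the {numItems} most recent items from {source}.
    return "item1\nitem2\nitem3\nitem4\nitem5"

def invoke_llm(prev: str) -> str:
    # Would call the OpenRouter model with the rendered prompt.
    prompt = render_template("Summarize:\n{previousBlockResult}", prev)
    return f"summary of {prompt.count(chr(10))} lines"

def create_audio(prev: str) -> str:
    # Would send the rendered text to the ElevenLabs TTS model.
    text = render_template("{previousBlockResult}", prev)
    return f"audio({text})"
```

Running `run_blocks([use_rss_feed, invoke_llm, create_audio])` threads each block's output into the next, which is the same flow the canvas performs between services.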