Build your own Artifact Previewer using Local LLM

Updated • 1 min read

Backend Engineer 🚀 Cloud Native Enthusiast ☁️

Large language models are great at generating code, but we still have to go the extra mile to actually run it; well, not anymore.

In this video, we will build an application that can use both local and hosted large language models (Gemini, ChatGPT, and so on) to generate code, and then we'll build our own sandbox to run that code and preview the output. This is similar to what you get with ChatGPT Canvas and Claude Artifacts, but in our case we have complete end-to-end control over the environment, the dependencies, and the entire workflow. Go beyond generation – execute and iterate with confidence! 🚀
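To make the generate-then-execute loop concrete, here is a minimal Python sketch of the sandbox half of the idea: generated code is written to a temporary file and run in an isolated child process with a timeout, and its stdout becomes the "preview". The `generate_code` function is a hypothetical stand-in for a real model call (local or hosted); the video's actual implementation will differ.

```python
import os
import subprocess
import sys
import tempfile


def generate_code(prompt: str) -> str:
    # Hypothetical placeholder: in the real app this would call a local or
    # hosted LLM (e.g. an Ollama or Gemini endpoint) and return its code output.
    return "print(sum(range(10)))\n"


def run_sandboxed(code: str, timeout: int = 5) -> str:
    """Run untrusted generated code in a separate Python process.

    Isolation here is deliberately minimal (separate process, isolated
    interpreter mode, timeout); a production sandbox would add containers
    or seccomp-style restrictions on top.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode, ignores env/site
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return result.stdout
    finally:
        os.unlink(path)


# The "preview": execute the model's code and show its output.
output = run_sandboxed(generate_code("sum the first ten integers"))
print(output.strip())
```

Running the generated code in a child process rather than with `exec()` keeps crashes, infinite loops, and interpreter state out of the host application, which is the core of what a self-hosted artifact previewer needs.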