55 Integrations with bolt.diy

Below is a list of bolt.diy integrations and software that integrates with bolt.diy. Compare the best bolt.diy integrations by features, ratings, user reviews, and pricing. Here are the current bolt.diy integrations in 2026:

  • 1
    OpenRouter

    OpenRouter is a unified interface for LLMs. OpenRouter scouts for the lowest prices and best latencies/throughputs across dozens of providers and lets you choose how to prioritize them. There is no need to change your code when switching between models or providers, and you can even let users choose and pay for their own. Rather than relying on flawed evals, compare models by how often they are used for different purposes, or chat with several at once in the chatroom. Model usage can be paid by users, developers, or both, and may shift in availability; you can also fetch models, prices, and limits via API. OpenRouter routes requests to the best available providers for your model, given your preferences. By default, requests are load-balanced across the top providers to maximize uptime, but you can customize this using the provider object in the request body, for example to prioritize providers that have not seen significant outages in the last 10 seconds (see the request sketch below).
    Starting Price: $2 one-time payment
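    A minimal request sketch, assuming OpenRouter's OpenAI-compatible chat completions endpoint; the model id, the OPENROUTER_API_KEY variable, and the provider-preference field shown here are illustrative placeholders, not a definitive integration:

      // Sketch: send a chat completion through OpenRouter and let it pick a provider.
      const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`, // placeholder env var
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model: "mistralai/mistral-7b-instruct",            // any model id listed by the models API
          messages: [{ role: "user", content: "Hello!" }],
          provider: { allow_fallbacks: true },                // illustrative provider preference
        }),
      });
      const data = await res.json();
      console.log(data.choices?.[0]?.message?.content);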
  • 2
    MATLAB

    The MathWorks

    MATLAB® combines a desktop environment tuned for iterative analysis and design processes with a programming language that expresses matrix and array mathematics directly. It includes the Live Editor for creating scripts that combine code, output, and formatted text in an executable notebook. MATLAB toolboxes are professionally developed, rigorously tested, and fully documented. MATLAB apps let you see how different algorithms work with your data. Iterate until you’ve got the results you want, then automatically generate a MATLAB program to reproduce or automate your work. Scale your analyses to run on clusters, GPUs, and clouds with only minor code changes. There’s no need to rewrite your code or learn big data programming and out-of-memory techniques. Automatically convert MATLAB algorithms to C/C++, HDL, and CUDA code to run on your embedded processor or FPGA/ASIC. MATLAB works with Simulink to support Model-Based Design.
  • 3
    OpenAI

    OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. Apply our API to any language task — semantic search, summarization, sentiment analysis, content generation, translation, and more — with only a few examples or by specifying your task in English. One simple integration gives you access to our constantly improving AI technology. The request sketch below shows one way such a completion call might look.
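    A minimal sketch of such a request, assuming the chat completions endpoint and an OPENAI_API_KEY environment variable; the model id and prompt are illustrative:

      // Sketch: summarize a passage by describing the task in plain English.
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, // placeholder env var
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model: "gpt-4o",                                       // illustrative model id
          messages: [
            { role: "system", content: "Summarize the user's text in one sentence." },
            { role: "user", content: "OpenAI provides an API for language tasks." },
          ],
        }),
      });
      const data = await res.json();
      console.log(data.choices?.[0]?.message?.content);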
  • 4
    Gemini

    Google

    Gemini is Google’s advanced AI assistant designed to help users think, create, learn, and complete tasks with a new level of intelligence. Powered by Google’s most capable models, including Gemini 3, it enables users to ask complex questions, generate content, analyze information, and explore ideas through natural conversation. Gemini can create images, videos, summaries, study plans, and first drafts while also providing feedback on uploaded files and written work. The platform is grounded in Google Search, allowing it to deliver accurate, up-to-date information and support deep follow-up questions. Gemini connects seamlessly with Google apps like Gmail, Docs, Calendar, Maps, YouTube, and Photos to help users complete tasks without switching tools. Features such as Gemini Live, Deep Research, and Gems enhance brainstorming, research, and personalized workflows. Available through flexible free and paid plans, Gemini supports everyday users, students, and professionals across devices.
    Starting Price: Free
  • 5
    DeepSeek

    DeepSeek is a cutting-edge AI assistant powered by the advanced DeepSeek-V3 model, featuring over 600 billion parameters for exceptional performance. Designed to compete with top global AI systems, it offers fast responses and a wide range of features to make everyday tasks easier and more efficient. Available across multiple platforms, including iOS, Android, and the web, DeepSeek ensures accessibility for users everywhere. The app supports multiple languages and has been continually updated to improve functionality, add new language options, and resolve issues. With its seamless performance and versatility, DeepSeek has garnered positive feedback from users worldwide.
    Starting Price: Free
  • 6
    Gemini Advanced
    Gemini Advanced is a cutting-edge AI model designed for unparalleled performance in natural language understanding, generation, and problem-solving across diverse domains. Featuring a revolutionary neural architecture, it delivers exceptional accuracy, nuanced contextual comprehension, and deep reasoning capabilities. Gemini Advanced is engineered to handle complex, multifaceted tasks, from creating detailed technical content and writing code to conducting in-depth data analysis and providing strategic insights. Its adaptability and scalability make it a powerful solution for both individual users and enterprise-level applications. Gemini Advanced sets a new standard for intelligence, innovation, and reliability in AI-powered solutions. You'll also get access to Gemini in Gmail, Docs, and more, 2 TB storage, and other benefits from Google One. Gemini Advanced also offers access to Gemini with Deep Research. You can conduct in-depth and real-time research on almost any subject.
    Starting Price: $19.99 per month
  • 7
    Mistral AI

    Mistral AI is a pioneering artificial intelligence startup specializing in open-source generative AI. The company offers a range of customizable, enterprise-grade AI solutions deployable across various platforms, including on-premises, cloud, edge, and devices. Flagship products include "Le Chat," a multilingual AI assistant designed to enhance productivity in both personal and professional contexts, and "La Plateforme," a developer platform that enables the creation and deployment of AI-powered applications. Committed to transparency and innovation, Mistral AI positions itself as a leading independent AI lab, contributing significantly to open-source AI and policy development.
    Starting Price: Free
  • 8
    React

    React makes it painless to create interactive UIs. Design simple views for each state in your application, and React will efficiently update and render just the right components when your data changes. Declarative views make your code more predictable and easier to debug. Build encapsulated components that manage their own state, then compose them to make complex UIs. Since component logic is written in JavaScript instead of templates, you can easily pass rich data through your app and keep state out of the DOM. We don’t make assumptions about the rest of your technology stack, so you can develop new features in React without rewriting existing code. React components implement a render() method that takes input data and returns what to display. The sketch below uses an XML-like syntax called JSX. Input data that is passed into the component can be accessed by render() via this.props.
    Starting Price: Free
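    A minimal sketch of the render()/this.props pattern described above, written as a TSX class component; the component name, prop value, and root element id are illustrative:

      import React from "react";
      import { createRoot } from "react-dom/client";

      // A component implements render(); input data arrives through this.props.
      class HelloMessage extends React.Component<{ name: string }> {
        render() {
          return <div>Hello {this.props.name}</div>;
        }
      }

      // Mount the component; the prop value "Taylor" is just an example.
      createRoot(document.getElementById("root")!).render(<HelloMessage name="Taylor" />);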
  • 9
    Java

    Oracle

    The Java™ Programming Language is a general-purpose, concurrent, strongly typed, class-based object-oriented language. It is normally compiled to the bytecode instruction set and binary format defined in the Java Virtual Machine Specification. In the Java programming language, all source code is first written in plain text files ending with the .java extension. Those source files are then compiled into .class files by the javac compiler. A .class file does not contain code that is native to your processor; it instead contains bytecodes — the machine language of the Java Virtual Machine (Java VM). The java launcher tool then runs your application with an instance of the Java Virtual Machine.
    Starting Price: Free
  • 10
    Claude

    Anthropic

    Claude is a next-generation AI assistant developed by Anthropic to help individuals and teams solve complex problems with safety, accuracy, and reliability at its core. It is designed to support a wide range of tasks, including writing, editing, coding, data analysis, and research. Claude allows users to create and iterate on documents, websites, graphics, and code directly within chat using collaborative tools like Artifacts. The platform supports file uploads, image analysis, and data visualization to enhance productivity and understanding. Claude is available across web, iOS, and Android, making it accessible wherever work happens. With built-in web search and extended reasoning capabilities, Claude helps users find information and think through challenging problems more effectively. Anthropic emphasizes security, privacy, and responsible AI development to ensure Claude can be trusted in professional and personal workflows.
    Starting Price: Free
  • 11
    GPT-4o

    OpenAI

    GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.
    Starting Price: $5.00 / 1M tokens
  • 12
    Python

    The core of extensible programming is defining functions. Python allows mandatory and optional arguments, keyword arguments, and even arbitrary argument lists. Python is easy to pick up whether you're a first-time programmer or experienced with other languages. The following pages are a useful first step on your way to writing programs with Python! The community hosts conferences and meetups to collaborate on code, and much more. Python's documentation will help you along the way, and the mailing lists will keep you in touch. The Python Package Index (PyPI) hosts thousands of third-party modules for Python. Both Python's standard library and the community-contributed modules allow for endless possibilities.
    Starting Price: Free
  • 13
    Gemini 2.0
    Gemini 2.0 is an advanced AI-powered model developed by Google, designed to offer groundbreaking capabilities in natural language understanding, reasoning, and multimodal interactions. Building on the success of its predecessor, Gemini 2.0 integrates large language processing with enhanced problem-solving and decision-making abilities, enabling it to interpret and generate human-like responses with greater accuracy and nuance. Unlike traditional AI models, Gemini 2.0 is trained to handle multiple data types simultaneously, including text, images, and code, making it a versatile tool for research, business, education, and creative industries. Its core improvements include better contextual understanding, reduced bias, and a more efficient architecture that ensures faster, more reliable outputs. Gemini 2.0 is positioned as a major step forward in the evolution of AI, pushing the boundaries of human-computer interaction.
    Starting Price: Free
  • 14
    MaxAI

    MaxAI is an all-in-one AI-powered Chrome extension designed to supercharge productivity. It integrates seamlessly into your browser, enabling users to instantly interact with AI models like GPT, Claude, and Gemini to enhance work processes. MaxAI offers a range of tools including AI-driven writing improvements, summarizations, translations, and real-time answers directly from webpages. It allows users to manage prompts, automate tasks, and create personalized responses for emails or social media with just one click. MaxAI’s versatility makes it a must-have for anyone looking to boost productivity and efficiency while navigating the web.
  • 15
    Gemini Pro
    Gemini is natively multimodal, which gives you the potential to transform any type of input into any type of output. We've built Gemini responsibly from the start, incorporating safeguards and working together with partners to make it safer and more inclusive. Integrate Gemini models into your applications with Google AI Studio and Google Cloud Vertex AI.
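    A minimal sketch of calling a Gemini model through the Google AI Studio (Generative Language) REST API; the model id, the GEMINI_API_KEY variable, and the prompt are illustrative, and Vertex AI uses a different endpoint:

      // Sketch: generate text with a Gemini model via the generateContent endpoint.
      const model = "gemini-1.5-pro";                            // illustrative model id
      const res = await fetch(
        `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent?key=${process.env.GEMINI_API_KEY}`,
        {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({
            contents: [{ parts: [{ text: "Explain what makes a model multimodal." }] }],
          }),
        },
      );
      const data = await res.json();
      console.log(data.candidates?.[0]?.content?.parts?.[0]?.text);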
  • 16
    Gemini 2.0 Flash
    The Gemini 2.0 Flash AI model represents the next generation of high-speed, intelligent computing, designed to set new benchmarks in real-time language processing and decision-making. Building on the robust foundation of its predecessor, it incorporates enhanced neural architecture and breakthrough advancements in optimization, enabling even faster and more accurate responses. Gemini 2.0 Flash is designed for applications requiring instantaneous processing and adaptability, such as live virtual assistants, automated trading systems, and real-time analytics. Its lightweight, efficient design ensures seamless deployment across cloud, edge, and hybrid environments, while its improved contextual understanding and multitasking capabilities make it a versatile tool for tackling complex, dynamic workflows with precision and speed.
  • 17
    Gemini Nano
    Gemini Nano from Google is a lightweight, energy-efficient AI model designed for high performance in compact, resource-constrained environments. Tailored for edge computing and mobile applications, Gemini Nano combines Google's advanced AI architecture with cutting-edge optimization techniques to deliver seamless performance without compromising speed or accuracy. Despite its compact size, it excels in tasks like voice recognition, natural language processing, real-time translation, and personalized recommendations. With a focus on privacy and efficiency, Gemini Nano processes data locally, minimizing reliance on cloud infrastructure while maintaining robust security. Its adaptability and low power consumption make it an ideal choice for smart devices, IoT ecosystems, and on-the-go AI solutions.
  • 18
    Gemini 1.5 Pro
    The Gemini 1.5 Pro AI model is a state-of-the-art language model designed to deliver highly accurate, context-aware, and human-like responses across a variety of applications. Built with cutting-edge neural architecture, it excels in natural language understanding, generation, and reasoning tasks. The model is fine-tuned for versatility, supporting tasks like content creation, code generation, data analysis, and complex problem-solving. Its advanced algorithms ensure nuanced comprehension, enabling it to adapt to different domains and conversational styles seamlessly. With a focus on scalability and efficiency, the Gemini 1.5 Pro is optimized for both small-scale implementations and enterprise-level integrations, making it a powerful tool for enhancing productivity and innovation.
  • 19
    Gemini 1.5 Flash
    The Gemini 1.5 Flash AI model is an advanced, high-speed language model engineered for lightning-fast processing and real-time responsiveness. Designed to excel in dynamic and time-sensitive applications, it combines streamlined neural architecture with cutting-edge optimization techniques to deliver exceptional performance without compromising on accuracy. Gemini 1.5 Flash is tailored for scenarios requiring rapid data processing, instant decision-making, and seamless multitasking, making it ideal for chatbots, customer support systems, and interactive applications. Its lightweight yet powerful design ensures it can be deployed efficiently across a range of platforms, from cloud-based environments to edge devices, enabling businesses to scale their operations with unmatched agility.
  • 20
    Mistral 7B

    Mistral AI

    Mistral 7B is a 7.3-billion-parameter language model that outperforms larger models like Llama 2 13B across various benchmarks. It employs Grouped-Query Attention (GQA) for faster inference and Sliding Window Attention (SWA) to efficiently handle longer sequences. Released under the Apache 2.0 license, Mistral 7B is accessible for deployment across diverse platforms, including local environments and major cloud services. Additionally, a fine-tuned version, Mistral 7B Instruct, demonstrates enhanced performance in instruction-following tasks, surpassing models like Llama 2 13B Chat.
    Starting Price: Free
  • 21
    Codestral Mamba
    As a tribute to Cleopatra, whose glorious destiny ended in tragic snake circumstances, we are proud to release Codestral Mamba, a Mamba2 language model specialized in code generation, available under an Apache 2.0 license. Codestral Mamba is another step in our effort to study and provide new architectures. It is available for free use, modification, and distribution, and we hope it will open new perspectives in architecture research. Mamba models offer the advantage of linear time inference and the theoretical ability to model sequences of infinite length. This allows users to engage with the model extensively with quick responses, irrespective of the input length. This efficiency is especially relevant for code productivity use cases, which is why we trained this model with advanced code and reasoning capabilities, enabling it to perform on par with SOTA transformer-based models.
    Starting Price: Free
  • 22
    Mistral NeMo

    Mistral AI

    Mistral NeMo is our new best small model: a state-of-the-art 12B model with a 128k-token context length, released under the Apache 2.0 license and built in collaboration with NVIDIA. Its reasoning, world knowledge, and coding accuracy are state-of-the-art in its size category. As it relies on standard architecture, Mistral NeMo is easy to use and a drop-in replacement in any system using Mistral 7B. We have released pre-trained base and instruction-tuned checkpoints under the Apache 2.0 license to promote adoption for researchers and enterprises. Mistral NeMo was trained with quantization awareness, enabling FP8 inference without any performance loss. The model is designed for global, multilingual applications; it is trained on function calling and has a large context window. Compared to Mistral 7B, it is much better at following precise instructions, reasoning, and handling multi-turn conversations.
    Starting Price: Free
  • 23
    Mixtral 8x22B

    Mistral AI

    Mixtral 8x22B is our latest open model. It sets a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. It is fluent in English, French, Italian, German, and Spanish. It has strong mathematics and coding capabilities. It is natively capable of function calling; along with the constrained output mode implemented on la Plateforme, this enables application development and tech stack modernization at scale. Its 64K tokens context window allows precise information recall from large documents. We build models that offer unmatched cost efficiency for their respective sizes, delivering the best performance-to-cost ratio within models provided by the community. Mixtral 8x22B is a natural continuation of our open model family. Its sparse activation patterns make it faster than any dense 70B model.
    Starting Price: Free
  • 24
    Mathstral

    Mistral AI

    As a tribute to Archimedes, whose 2311th anniversary we’re celebrating this year, we are proud to release our first Mathstral model, a specific 7B model designed for math reasoning and scientific discovery. The model has a 32k context window and is published under the Apache 2.0 license. We’re contributing Mathstral to the science community to bolster efforts in advanced mathematical problems requiring complex, multi-step logical reasoning. The Mathstral release is part of our broader effort to support academic projects; it was produced in the context of our collaboration with Project Numina. Akin to Isaac Newton in his time, Mathstral stands on the shoulders of Mistral 7B and specializes in STEM subjects. It achieves state-of-the-art reasoning capacities in its size category across various industry-standard benchmarks, scoring 56.6% on MATH and 63.47% on MMLU.
    Starting Price: Free
  • 25
    Ministral 3B

    Mistral AI

    Mistral AI introduced two state-of-the-art models for on-device computing and edge use cases, named "les Ministraux": Ministral 3B and Ministral 8B. These models set a new frontier in knowledge, commonsense reasoning, function-calling, and efficiency in the sub-10B category. They can be used or tuned for various applications, from orchestrating agentic workflows to creating specialist task workers. Both models support up to 128k context length (currently 32k on vLLM), and Ministral 8B features a special interleaved sliding-window attention pattern for faster and memory-efficient inference. These models were built to provide a compute-efficient and low-latency solution for scenarios such as on-device translation, internet-less smart assistants, local analytics, and autonomous robotics. Used in conjunction with larger language models like Mistral Large, les Ministraux also serve as efficient intermediaries for function-calling in multi-step agentic workflows.
    Starting Price: Free
  • 26
    Ministral 8B

    Mistral AI

    Mistral AI has introduced two advanced models for on-device computing and edge applications, named "les Ministraux": Ministral 3B and Ministral 8B. These models excel in knowledge, commonsense reasoning, function-calling, and efficiency within the sub-10B parameter range. They support up to 128k context length and are designed for various applications, including on-device translation, offline smart assistants, local analytics, and autonomous robotics. Ministral 8B features an interleaved sliding-window attention pattern for faster and more memory-efficient inference. Both models can function as intermediaries in multi-step agentic workflows, handling tasks like input parsing, task routing, and API calls based on user intent with low latency and cost. Benchmark evaluations indicate that les Ministraux consistently outperforms comparable models across multiple tasks. As of October 16, 2024, both models are available, with Ministral 8B priced at $0.1 per million tokens.
    Starting Price: Free
  • 27
    Mistral Small

    Mistral AI

    On September 17, 2024, Mistral AI announced several key updates to enhance the accessibility and performance of their AI offerings. They introduced a free tier on "La Plateforme," their serverless platform for tuning and deploying Mistral models as API endpoints, enabling developers to experiment and prototype at no cost. Additionally, Mistral AI reduced prices across their entire model lineup, with significant cuts such as a 50% reduction for Mistral Nemo and an 80% decrease for Mistral Small and Codestral, making advanced AI more cost-effective for users. The company also unveiled Mistral Small v24.09, a 22-billion-parameter model offering a balance between performance and efficiency, suitable for tasks like translation, summarization, and sentiment analysis. Furthermore, they made Pixtral 12B, a vision-capable model with image understanding capabilities, freely available on "Le Chat," allowing users to analyze and caption images without compromising text-based performance.
    Starting Price: Free
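    A minimal sketch of calling a Mistral model on La Plateforme, assuming its OpenAI-style chat completions endpoint; the model id and the MISTRAL_API_KEY variable are illustrative:

      // Sketch: chat completion against the Mistral API.
      const res = await fetch("https://api.mistral.ai/v1/chat/completions", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.MISTRAL_API_KEY}`, // placeholder env var
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model: "mistral-small-latest",                          // illustrative model id
          messages: [{ role: "user", content: "Summarize this sentence in five words." }],
        }),
      });
      const data = await res.json();
      console.log(data.choices?.[0]?.message?.content);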
  • 28
    Node.js

    As an asynchronous event-driven JavaScript runtime, Node.js is designed to build scalable network applications. Upon each connection, the callback is fired, but if there is no work to be done, Node.js will sleep. This is in contrast to today's more common concurrency model, in which OS threads are employed. Thread-based networking is relatively inefficient and very difficult to use. Furthermore, users of Node.js are free from worries of deadlocking the process, since there are no locks. Almost no function in Node.js directly performs I/O, so the process never blocks except when the I/O is performed using synchronous methods of the Node.js standard library. Because nothing blocks, scalable systems are very reasonable to develop in Node.js (see the server sketch below). Node.js is similar in design to, and influenced by, systems like Ruby's Event Machine and Python's Twisted. Node.js takes the event model a bit further. It presents an event loop as a runtime construct instead of as a library.
    Starting Price: Free
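    A minimal sketch of the callback-per-connection model described above, using Node's built-in http module; the port number is arbitrary:

      import { createServer } from "node:http";

      // Each request fires a callback; nothing blocks between requests,
      // so one process can serve many connections concurrently.
      const server = createServer((req, res) => {
        res.writeHead(200, { "Content-Type": "text/plain" });
        res.end("Hello from the event loop\n");
      });

      server.listen(3000, () => console.log("Listening on http://localhost:3000"));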
  • 29
    Hugging Face

    Hugging Face is a leading platform for AI and machine learning, offering a vast hub for models, datasets, and tools for natural language processing (NLP) and beyond. The platform supports a wide range of applications, from text, image, and audio to 3D data analysis. Hugging Face fosters collaboration among researchers, developers, and companies by providing open-source tools like Transformers, Diffusers, and Tokenizers. It enables users to build, share, and access pre-trained models, accelerating AI development for a variety of industries.
    Starting Price: $9 per month
  • 30
    CSS

    CSS, short for Cascading Style Sheets, is a style sheet language used by web developers to style HTML and other elements of a website. CSS is one of the most widely used languages on the web. For style sheets to work, it is important that your markup be free of errors. A convenient way to automatically fix markup errors is to use the HTML Tidy utility. This also tidies the markup, making it easier to read and easier to edit. It is a good idea to regularly run Tidy over any markup you are editing; Tidy is very effective at cleaning up markup created by authoring tools with sloppy habits. Each style property starts with the property's name, then a colon, and lastly the value for this property. When there is more than one style property in the list, you need to use a semicolon between each of them to delimit one property from the next.
    Starting Price: Free
  • 31
    Kotlin

    Easy to pick up, so you can create powerful applications immediately. Compatible with the Java ecosystem. Use your favorite JVM frameworks and libraries. Share application logic between web, mobile, and desktop platforms while keeping an experience native to users. Save time and get the benefit of unlimited access to features specific to these platforms. Kotlin has great support and many contributors in its fast-growing global community. Enjoy the benefits of a rich ecosystem with a wide range of community libraries. Help is never far away — consult extensive community resources or ask the Kotlin team directly. Kotlin Multiplatform Mobile is an SDK for iOS and Android app development. It offers all the combined benefits of creating cross-platform and native apps. Maintain a single codebase for networking, data storage, analytics, and the other logic of your Android and iOS apps.
    Starting Price: Free
  • 32
    PHP

    Fast, flexible and pragmatic, PHP powers everything from your blog to the most popular websites in the world. The PHP development team announces the immediate availability of PHP 8.0.20. When using the PHP.net website, there is no need to use a search box to quickly reach the content you are looking for; you can use short PHP.net URLs to access pages directly.
    Starting Price: Free
  • 33
    Swift

    Apple

    Writing Swift code is interactive and fun, the syntax is concise yet expressive, and Swift includes modern features developers love. Swift code is safe by design and produces software that runs lightning-fast. Swift is the result of the latest research on programming languages, combined with decades of experience building Apple platforms. Named parameters are expressed in a clean syntax that makes APIs in Swift even easier to read and maintain. Even better, you don’t even need to type semi-colons. Inferred types make code cleaner and less prone to mistakes, while modules eliminate headers and provide namespaces. To best support international languages and emoji, Strings are Unicode-correct and use a UTF-8 based encoding to optimize performance for a wide-variety of use cases. You can even write concurrent code with simple, built-in keywords that define asynchronous behavior, making your code more readable and less error-prone.
    Starting Price: Free
  • 34
    TypeScript

    TypeScript adds additional syntax to JavaScript to support a tighter integration with your editor, so you can catch errors early as you type. TypeScript code converts to JavaScript, which runs anywhere JavaScript runs: in a browser, on Node.js or Deno, and in your apps. TypeScript understands JavaScript and uses type inference to give you great tooling without additional code. TypeScript was used by 78% of the 2020 State of JS respondents, with 93% saying they would use it again. The most common kinds of errors that programmers write can be described as type errors: a certain kind of value was used where a different kind of value was expected. This could be due to simple typos, a failure to understand the API surface of a library, incorrect assumptions about runtime behavior, or other errors (illustrated in the sketch below).
    Starting Price: Free
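    A small sketch of the kind of type error TypeScript catches at compile time; the names here are made up for illustration:

      // Types are inferred for `user` without any annotation.
      const user = { name: "Ada", id: 1 };

      function greet(name: string): string {
        return `Hello, ${name}!`;
      }

      greet(user.name); // OK: a string is passed where a string is expected
      // greet(user.id); // would not compile: number is not assignable to parameter of type 'string'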
  • 35
    R

    The R Foundation

    R is a language and environment for statistical computing and graphics. It is a GNU project which is similar to the S language and environment which was developed at Bell Laboratories (formerly AT&T, now Lucent Technologies) by John Chambers and colleagues. R can be considered as a different implementation of S. There are some important differences, but much code written for S runs unaltered under R. R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, …) and graphical techniques, and is highly extensible. The S language is often the vehicle of choice for research in statistical methodology, and R provides an Open Source route to participation in that activity. One of R’s strengths is the ease with which well-designed publication-quality plots can be produced, including mathematical symbols and formulae where needed.
    Starting Price: Free
  • 36
    Rust

    Rust is blazingly fast and memory-efficient: with no runtime or garbage collector, it can power performance-critical services, run on embedded devices, and easily integrate with other languages. Rust’s rich type system and ownership model guarantee memory-safety and thread-safety — enabling you to eliminate many classes of bugs at compile-time. Rust has great documentation, a friendly compiler with useful error messages, and top-notch tooling — an integrated package manager and build tool, smart multi-editor support with auto-completion and type inspections, an auto-formatter, and more. Whip up a CLI tool quickly with Rust’s robust ecosystem. Rust helps you maintain your app with confidence and distribute it with ease. Use Rust to supercharge your JavaScript, one module at a time. Publish to npm, bundle with webpack, and you’re off to the races.
    Starting Price: Free
  • 37
    Go

    Golang

    With a strong ecosystem of tools and APIs on major cloud providers, it is easier than ever to build services with Go. With popular open source packages and a robust standard library, use Go to create fast and elegant CLIs. With enhanced memory performance and support for several IDEs, Go powers fast and scalable web applications. With fast build times, lean syntax, an automatic formatter and doc generator, Go is built to support both DevOps and SRE. Get started on a new project or brush up on your existing Go code with the official documentation, which covers everything there is to know about Go. An interactive introduction to Go comes in three sections, each concluding with a few exercises so you can practice what you've learned. The Playground allows anyone with a web browser to write Go code that is immediately compiled, linked, and run on Go's servers.
    Starting Price: Free
  • 38
    Markdown

    Markdown allows you to write using an easy-to-read, easy-to-write plain text format, then convert it to structurally valid XHTML (or HTML). Thus, “Markdown” is two things: (1) a plain text formatting syntax; and (2) a software tool, written in Perl, that converts the plain text formatting to HTML. See the Syntax page for details pertaining to Markdown’s formatting syntax. You can try it out, right now, using the online Dingus. The overriding design goal for Markdown’s formatting syntax is to make it as readable as possible. The idea is that a Markdown-formatted document should be publishable as-is, as plain text, without looking like it’s been marked up with tags or formatting instructions. While Markdown’s syntax has been influenced by several existing text-to-HTML filters, the single biggest source of inspiration for Markdown’s syntax is the format of plain text email.
    Starting Price: Free
  • 39
    Grok

    xAI

    Grok is an AI modeled after the Hitchhiker’s Guide to the Galaxy, so intended to answer almost anything and, far harder, even suggest what questions to ask! Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor! A unique and fundamental advantage of Grok is that it has real-time knowledge of the world via the 𝕏 platform. It will also answer spicy questions that are rejected by most other AI systems.
    Starting Price: Free
  • 40
    Ollama

    Ollama is an innovative platform that focuses on providing AI-powered tools and services, designed to make it easier for users to run AI models locally and to interact with and build AI-driven applications. By offering a range of solutions, including natural language processing models and customizable AI features, Ollama empowers developers, businesses, and organizations to integrate advanced machine learning technologies into their workflows. With an emphasis on usability and accessibility, Ollama strives to simplify the process of working with AI, making it an appealing option for those looking to harness the potential of artificial intelligence in their projects (see the local request sketch below).
    Starting Price: Free
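    A minimal sketch of talking to a locally running Ollama server, assuming its default port 11434 and chat endpoint; the model name is illustrative and must already be pulled locally:

      // Sketch: one-shot chat against a local Ollama instance.
      const res = await fetch("http://localhost:11434/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: "llama3",                                        // illustrative local model
          messages: [{ role: "user", content: "Why is the sky blue?" }],
          stream: false,                                          // return a single JSON response
        }),
      });
      const data = await res.json();
      console.log(data.message?.content);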
  • 41
    Mixtral 8x7B

    Mistral AI

    Mixtral 8x7B is a high-quality sparse mixture of experts model (SMoE) with open weights. Licensed under Apache 2.0. Mixtral outperforms Llama 2 70B on most benchmarks with 6x faster inference. It is the strongest open-weight model with a permissive license and the best model overall regarding cost/performance trade-offs. In particular, it matches or outperforms GPT-3.5 on most standard benchmarks.
    Starting Price: Free
  • 42
    Codestral

    Mistral AI

    We introduce Codestral, our first-ever code model. Codestral is an open-weight generative AI model explicitly designed for code generation tasks. It helps developers write and interact with code through a shared instruction and completion API endpoint. As it masters code and English, it can be used to design advanced AI applications for software developers. Codestral is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash. It also performs well on more specific ones like Swift and Fortran. This broad language base ensures Codestral can assist developers in various coding environments and projects.
    Starting Price: Free
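    A rough sketch of a fill-in-the-middle completion request, assuming Codestral's dedicated FIM endpoint and a CODESTRAL_API_KEY variable; the endpoint, model id, and field names are stated from memory and should be checked against Mistral's documentation:

      // Sketch: ask Codestral to fill in the body between a prompt and a suffix.
      const res = await fetch("https://codestral.mistral.ai/v1/fim/completions", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.CODESTRAL_API_KEY}`, // placeholder env var
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model: "codestral-latest",                               // illustrative model id
          prompt: "function fibonacci(n: number): number {",
          suffix: "}",
          max_tokens: 64,
        }),
      });
      console.log(await res.json());                               // inspect the raw response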
  • 43
    Mistral Large

    Mistral AI

    Mistral Large is Mistral AI's flagship language model, designed for advanced text generation and complex multilingual reasoning tasks, including text comprehension, transformation, and code generation. It supports English, French, Spanish, German, and Italian, offering a nuanced understanding of grammar and cultural contexts. With a 32,000-token context window, it can accurately recall information from extensive documents. The model's precise instruction-following and native function-calling capabilities facilitate application development and tech stack modernization. Mistral Large is accessible through Mistral's platform, Azure AI Studio, and Azure Machine Learning, and can be self-deployed for sensitive use cases. Benchmark evaluations indicate that Mistral Large achieves strong results, making it the world's second-ranked model generally available through an API, next to GPT-4.
    Starting Price: Free
  • 44
    Gemini Enterprise
    Gemini Enterprise is a comprehensive AI platform built by Google Cloud designed to bring the full power of Google’s advanced AI models, agent-creation tools, and enterprise-grade data access into everyday workflows. The solution offers a unified chat interface that lets employees interact with internal documents, applications, data sources, and custom AI agents. At its core, Gemini Enterprise comprises six key components: the Gemini family of large multimodal models, an agent orchestration workbench (formerly Google Agentspace), pre-built starter agents, robust data-integration connectors to business systems, extensive security and governance controls, and a partner ecosystem for tailored integrations. It is engineered to scale across departments and enterprises, enabling users to build no-code or low-code agents that automate tasks, such as research synthesis, customer support response, code assist, contract analysis, and more, while operating within corporate compliance standards.
    Starting Price: $21 per month
  • 45
    Ruby

    Ruby is a dynamic, open source programming language with a focus on simplicity and productivity. It has an elegant syntax that is natural to read and easy to write. Created by Yukihiro “Matz” Matsumoto, Ruby blends parts of his favorite languages (Perl, Smalltalk, Eiffel, Ada, and Lisp) to balance functional and imperative programming. Ruby is widely used for web development (most notably with the Ruby on Rails framework), scripting, and automation, and it ships with a large standard library and the RubyGems package ecosystem for third-party libraries.
    Starting Price: Free
  • 46
    JavaScript

    JavaScript is a scripting and programming language for the web that enables developers to build dynamic elements on the web. Over 97% of the websites in the world use client-side JavaScript, making it one of the most important scripting languages on the web. Strings in JavaScript are contained within a pair of either single quotation marks '' or double quotation marks "". Both delimit strings, but be sure to choose one style and stick with it: if you start with a single quote, you need to end with a single quote. There are pros and cons to each, e.g. single quotes tend to make it easier to write HTML within JavaScript because you don't have to escape double quotes. If you need quotation marks inside a string, use the opposite kind of quotation mark from the one delimiting the string, as in the snippet below.
    Starting Price: Free
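    A small sketch of the quoting rules described above; the strings are arbitrary examples:

      // Single and double quotes both delimit strings; pick one style and stay consistent.
      const single = 'He said "hello" to the visitor';        // double quotes inside single quotes
      const double = "It's easy to keep apostrophes here";    // apostrophe inside double quotes
      const escaped = 'It\'s also possible to escape the delimiter';
      console.log(single, double, escaped);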
  • 47
    SQL

    SQL is a domain-specific programming language used for accessing, managing, and manipulating relational databases and relational database management systems.
    Starting Price: Free
  • 48
    C#

    Microsoft

    C# (also known as C Sharp, pronounced "See Sharp") is a modern, object-oriented, and type-safe programming language. C# enables developers to build many types of secure and robust applications that run in .NET. C# has its roots in the C family of languages and will be immediately familiar to C, C++, Java, and JavaScript programmers. This tour provides an overview of the major components of the language in C# 8 and earlier. C# is an object-oriented, component-oriented programming language. C# provides language constructs to directly support these concepts, making C# a natural language in which to create and use software components. Since its origin, C# has added features to support new workloads and emerging software design practices. At its core, C# is an object-oriented language. You define types and their behavior.
    Starting Price: Free
  • 49
    Bash

    Bash is a free software Unix shell and command language. It has become the default login shell for most Linux distributions. In addition to being available on Linux systems, a version of Bash is also available for Windows through the Windows Subsystem for Linux. Bash is the default user shell in Solaris 11 and was the default shell in Apple macOS from version 10.3 until the release of macOS Catalina, which changed the default shell to zsh. Despite this change, Bash remains available as an alternative shell on macOS systems. As a command processor, Bash allows users to enter commands in a text window that are then executed by the system. Bash can also read and execute commands from a file, known as a shell script. It supports a number of features commonly found in Unix shells, including wildcard matching, piping, here documents, command substitution, variables, and control structures for condition testing and iteration. Bash is compliant with the POSIX shell standards.
    Starting Price: Free
  • 50
    Pixtral Large

    Mistral AI

    Pixtral Large is a 124-billion-parameter open-weight multimodal model developed by Mistral AI, building upon their Mistral Large 2 architecture. It integrates a 123-billion-parameter multimodal decoder with a 1-billion-parameter vision encoder, enabling advanced understanding of documents, charts, and natural images while maintaining leading text comprehension capabilities. With a context window of 128,000 tokens, Pixtral Large can process at least 30 high-resolution images simultaneously. The model has demonstrated state-of-the-art performance on benchmarks such as MathVista, DocVQA, and VQAv2, surpassing models like GPT-4o and Gemini-1.5 Pro. Pixtral Large is available under the Mistral Research License for research and educational use, and under the Mistral Commercial License for commercial applications.
    Starting Price: Free