Overview
Cody is an AI-powered coding assistant designed to augment the software development process. It offers a suite of features including intelligent code completion, chat-based interaction for answering technical questions, and a deep understanding of codebase context derived from Sourcegraph's code search capabilities. By using large language models (LLMs) and advanced search functionalities, Cody aims to increase developer productivity, improve the quality and consistency of code, and accelerate software development cycles.
Sourcegraph, the company behind Cody, launched its code search product in 2013. That product, originally sold under the Sourcegraph name, was rebranded as Code Search in 2023 when the company introduced Cody as its new AI coding assistant. Cody was first unveiled in June 2023, and after a period of development and refinement, Cody 1.0 reached general availability on December 14, 2023. The company's decade of experience building code search and intelligence tools provides a strong foundation for Cody's ability to understand and navigate complex codebases.
The open-source status of Sourcegraph's offerings has evolved over time. Initially, the Code Search product was provided under a Fair Source License before transitioning to the Apache License 2.0 in 2018. However, in 2023, most of the Code Search code shifted to a proprietary Enterprise license, although it remained publicly viewable. In contrast, Cody, the AI coding assistant, is open source under the Apache 2.0 license.
Core Differentiators
Cody distinguishes itself from other AI coding tools through several key features. A primary differentiator is its deep understanding of codebases, extending beyond the immediate file to encompass the broader project context. This allows Cody to understand the relationships between different components and provide more relevant and accurate suggestions, particularly in complex projects. This contextual awareness is powered by Sourcegraph's advanced code search engine, which helps Cody retrieve precise information from the entire codebase, leading to more idiomatic code generation and relevant answers.
Cody offers significant flexibility by supporting multiple LLMs from various providers, including Anthropic, OpenAI, Google, and Mistral. This multi-LLM support allows teams to choose the best model for specific tasks and adapt to the rapidly evolving landscape of AI models, unlike tools that might be restricted to a single provider.
Cody is also designed with enterprise needs in mind, emphasizing security and privacy. It offers options for self-hosting the underlying Sourcegraph platform and the ability to use custom API keys for LLMs, providing greater control over data and infrastructure. Features like "Smart Apply" enable code modifications across multiple files, streamlining complex refactoring tasks. Cody provides transparency by citing the sources it uses to generate responses, allowing developers to verify the context and build trust in the tool's suggestions. These differentiators position Cody as a powerful and versatile AI coding assistant suitable for both individual developers and large enterprise teams working on complex codebases.
Functionality
Interaction
Autocomplete
Cody offers robust code autocompletion capabilities. It provides both single-line and multi-line suggestions as developers type, significantly accelerating the coding process. This feature reduces the cognitive load on developers by anticipating their needs and suggesting code completions, minimizing the time spent searching for function or variable names. Cody's autocomplete is powered by LLMs optimized for low-latency completion, balancing accuracy with responsiveness.
Agent Mode
Cody incorporates an agent mode through its "Agentic Chat" feature. The agentic chat experience is designed to proactively gather, review, and refine relevant context to provide high-quality, context-aware responses to user prompts. This minimizes the need for manual context provision by the user. The agent can autonomously use various tools, including Code Search, access to Codebase Files, execution of Terminal commands (with permission), Web Browser for live context, and OpenCtx for integrating with other platforms.
Chat-Based Assistance
Cody offers comprehensive chat-based assistance. Developers can engage in conversations with Cody to ask questions about their codebase, generate new code snippets, and request modifications to existing code. Cody employs semantic search to retrieve relevant files from the codebase and uses the context from these files to formulate accurate answers. Users can also explicitly direct Cody to specific parts of the codebase by using the @ symbol to mention files, symbols, or even remote repositories.
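As an illustration of the @-mention syntax (the file and symbol names below are hypothetical), a chat prompt that pins Cody to specific parts of a codebase might look like:

```text
@src/auth/session.ts @validateToken
Why does validateToken reject sessions older than 24 hours, and where is that limit configured?
```

Mentioning a file or symbol this way supplies it as explicit context, rather than relying on semantic search to find it.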
Terminal Access
Cody, through its Agentic Chat feature and the Smart Apply functionality, can execute terminal commands. Within the Agentic Chat mode, if the AI determines that the output of a terminal command is necessary to answer a user's prompt, it will request permission to execute the command. Cody can use the output to provide a more accurate and helpful response. Additionally, the Smart Apply feature, which allows users to insert code suggestions from the chat into their files, also supports the execution of terminal commands when relevant to the user's query.
Integration
Supported Platforms and Development Environments
Cody offers broad support for various development platforms and integrated development environments (IDEs). It is available as an extension for popular IDEs such as Visual Studio Code and the JetBrains family of IDEs, including IntelliJ IDEA, PyCharm, WebStorm, GoLand, RubyMine, PhpStorm, Rider, CLion, DataGrip, RustRover, Aqua, and DataSpell. Additionally, Cody can be accessed directly through the Sourcegraph web app.
Supported Languages and Frameworks
Cody is designed to support all major programming languages. This broad compatibility is attributed to the underlying large language models that have been trained on vast datasets of code across numerous languages. Specific examples of supported languages include popular choices like Python, JavaScript, Rust, C, and C++, as well as others such as Julia, Zig, PHP, Java, TypeScript, Go, SQL, Swift, Objective-C, Perl, Kotlin, Dart, and various shell scripting languages like Bash and PowerShell. Beyond programming languages, Cody can also assist with configuration files and documentation.
Supported LLM Providers and Models
Cody offers users a significant degree of choice by supporting a range of LLM providers and models. These include models from leading providers such as Anthropic (Claude 3.5 Sonnet, Claude 3.5 Haiku, Claude 3 Opus, Claude Instant), OpenAI (GPT-4o, GPT-4, GPT-3.5 Turbo, o1-preview, o3-mini), Google (Gemini 2.0 Pro, Gemini 2.0 Flash, Gemini 1.5 Pro), and Mistral (Mixtral 8x7B, Codestral). This wide selection allows developers to choose the LLM that best suits their specific needs in terms of performance, accuracy, and cost. Additionally, Cody offers experimental support for local inference using Ollama, enabling developers to run popular models like deepseek-coder:6.7b and codellama:7b in offline or air-gapped environments.
Custom LLM Support
Cody provides the flexibility for users to utilize their own API keys for certain LLM providers. This feature is primarily available for Cody Enterprise users, who can bring their own API keys for supported LLM services like Azure OpenAI and Amazon Bedrock. This allows organizations to use their existing relationships with these cloud providers and directly manage their LLM consumption costs. Individual users can also configure Cody to use their own API keys for OpenAI-compatible models within the Cody extension.
Awareness
Context from Non-Code Sources
Cody can integrate with various non-code sources, including Jira, to pull in relevant context for its assistance. This integration is facilitated through OpenCtx, an open standard for bringing contextual information about code into developer tools. By leveraging OpenCtx providers, Cody Free and Pro users can fetch and use context from platforms such as Jira tickets, Linear issues, Notion pages, Google Docs, and even Sourcegraph code search itself. This capability allows Cody to consider a broader range of project-related information beyond just the codebase, providing contextually relevant answers and suggestions.
Public Documentation
Cody can reference public documentation by allowing users to include web URLs as context in their interactions. By using the @ mention syntax followed by a web URL, users can instruct Cody to fetch and consider the content of that URL when generating responses or code suggestions. This feature is particularly useful for providing Cody with access to the latest API documentation, language specifications, or other relevant online resources, ensuring that the AI assistant has the most up-to-date information to assist developers.
Web Search
Cody, through its Agentic Chat feature, has the capability to search the web for live context and information. When Agentic Chat is enabled, Cody can autonomously decide if searching the web would be beneficial in providing a more accurate and helpful response to a user's prompt. This allows Cody to extend its knowledge beyond the indexed codebase and integrated tools to include up-to-date information available on the internet.
Fine-Tuning
Cody offers mechanisms to adapt to team-specific coding styles. Developers can provide detailed instructions and context within their prompts, explicitly asking Cody to adhere to their organization's style guide. Creating, saving, and sharing custom prompts within a team can help promote consistent coding practices and ensure that Cody's suggestions align with team standards.
Codebase Context
Cody is designed to understand and adapt to a project's architecture by using its deep codebase context capabilities. Cody can understand the relationships and dependencies between different components within a codebase, even across multiple files. It uses advanced search to pull context from both local and remote repositories, gaining an understanding of APIs, symbols, and usage patterns throughout the entire project.
Leverage Multiple Files
Cody can use information from multiple files when generating code, thanks to its context-aware design. When asked to perform tasks like refactoring or bug fixing, Cody can understand the implications of changes across the entire codebase and suggest comprehensive solutions that might involve modifications to several files. The "Smart Apply" feature further facilitates this by allowing users to apply code blocks generated by Cody to multiple files in a controlled and sequential manner. For Cody Enterprise users, the ability to retrieve context from multiple repositories enables Cody to generate code that integrates with components and APIs in different parts of a large-scale project.
Privacy and Security
SOC 2 Compliance
Cody Enterprise is SOC 2 Type II compliant. Additionally, the underlying Sourcegraph Cloud platform, which can host Cody, has also achieved SOC 2 Type II compliance.
Code Retention Policy
Cody Enterprise offers a zero-retention policy for code and prompts when using Sourcegraph's provided LLMs. This means that the LLMs used by Cody Enterprise do not retain data from user requests beyond the time required to generate the output. Sourcegraph Partner LLMs also adhere to this "Zero Retention" policy, ensuring that input and output data, including embeddings, are not stored long-term. For enterprise customers who use their own LLM API key for self-hosted deployments, Sourcegraph does not collect or have access to User Prompts or Responses.
Self-Hosting and VPCs
Cody Enterprise provides the capability to self-host the entire Sourcegraph platform, allowing Cody to operate in conjunction with self-hosted code repositories within an organization's own data centers. Additionally, enterprises can leverage the "bring-your-own-key" (BYOK) option to deploy and use LLMs within a secure and private environment.
Use of Code for Training
Sourcegraph explicitly states that it does not use customer code for training the models used by Cody Enterprise or Cody Pro teams. Agreements are in place with their model service providers to ensure that user code is never retained or used for training purposes. For users on the Free and Pro tiers, Sourcegraph may use their data to fine-tune the model they are accessing, but only with explicit permission from the user.
Pricing
Team Plans
Sourcegraph Cody offers several plans that can be suitable for teams, with varying features and pricing structures. The Cody Pro plan, while designed for individuals or small teams, is priced at $9 per user per month. For growing organizations, the Enterprise Starter plan is available at $19 per user per month and supports up to 50 developers. The Enterprise plan, which offers advanced features, security, scalability, and flexibility, is priced at $59 per user per month for teams with 25 or more developers.
Individual Plans
Sourcegraph Cody offers an individual plan called Cody Pro, which is priced at $9 per month. This plan provides unlimited autocompletion suggestions and increased limits for chat and prompts compared to the free tier. Additionally, Sourcegraph provides a free plan, Cody Free, which offers basic features for individual hobbyists or developers just getting started with AI coding assistants.
Free Trial
Sourcegraph offers Cody Free, a plan that remains free for individual users indefinitely. For organizations considering the Enterprise plan, Sourcegraph offers a free 30-day trial.
Impact
Developer Productivity
Cody is designed to significantly enhance developer productivity across various aspects of the software development process. By automating repetitive coding tasks, providing intelligent real-time assistance, and improving the overall quality of code, Cody helps developers save time and focus on more complex and innovative aspects of their work. Sourcegraph estimates that developers at Coinbase using AI code assistants like Cody save approximately 5-6 hours per week and accomplish their tasks twice as fast compared to working without such tools. Engineers at Qualtrics reported a 28% reduction in the frequency of leaving their IDE to search for information on the web and a 25% faster understanding of code when using Cody.
Code Quality and Accuracy
By suggesting consistent coding styles and best practices, Cody can help enforce coding standards and minimize the introduction of bugs. It can also assist in generating unit tests, identifying potential code smells, and suggesting optimizations, leading to more maintainable code. Cody's deep context awareness can provide more accurate and relevant suggestions that align with a project's overall structure and conventions.
SDLC Phases Impacted
Cody's impact extends across multiple phases of the software development life cycle (SDLC). Its primary impact is on the coding phase, where it offers code generation, autocompletion, and inline editing capabilities. Cody also plays a role in the testing phase by generating unit tests and aiding in debugging. Additionally, it can assist with code refactoring, documentation, and even the onboarding of new team members by providing explanations of complex codebases.
Suggestion Accuracy and Acceptance
Cody has demonstrated a notable level of accuracy in its code suggestions. In October 2023, Cody achieved an initial completion acceptance rate of 30%, meaning roughly one in three suggestions was accepted by developers. Continuous improvements to Cody's autocomplete functionality have since led to a 58% increase in the number of accepted characters per user, suggesting that the quality and relevance of suggestions are improving.
Summary and Recommendations
Sourcegraph Cody’s core differentiators, particularly its deep codebase understanding and flexibility in supporting multiple LLMs, position it as a strong contender for engineering teams. The platform's functionality, including intelligent code completion, versatile chat-based assistance, and the ability to integrate with various development environments and non-code sources, offers significant potential for improving developer productivity across the software development lifecycle.
The strong emphasis on privacy and security, especially within the Cody Enterprise offering with its SOC 2 compliance, zero-retention policy, and self-hosting capabilities, addresses important concerns for organizations handling sensitive code and data.
While research suggests that AI coding assistants can boost productivity and improve code quality, it also highlights the importance of careful usage and code review to mitigate the risk of AI-introduced errors. Organizations implementing Cody should establish best practices for its use, including guidelines for prompt engineering and testing of AI-generated code.