Developer
Aayush Kapoor
83492835+aayush-kapoor@users.noreply.github.com
Performance
Key patterns and highlights from this developer's activity.
Breakdown of growth, maintenance, and fixes effort over time.
Bugs introduced vs. fixed over time.
Reclassifies engineering effort based on bug attribution. Commits that introduced bugs are retrospectively counted as poor investments.
Investment Quality reclassifies engineering effort using bug attribution data. Commits identified as buggy origins (those that introduced a bug later fixed by someone) have their grow and maintenance time moved into the Wasted Time category; the fix commits themselves still count as productive. All other commits retain their standard classification: grow is productive, maintenance is maintenance, and fixes are productive.
The standard model classifies commits as Growth, Maintenance, or Fixes. Investment Quality adds a quality lens: a commit that introduced a bug is retrospectively counted as a poor investment, since the time spent on it ultimately required additional fix work, while fix commits are reframed as productive, because fixing bugs is valuable work.
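The reclassification rule above can be sketched in a few lines. This is a minimal illustration, not the real implementation: the `Commit` shape, field names, and the `hours` unit are all assumptions.

```typescript
// Hypothetical sketch of Investment Quality reclassification.
// Assumed shapes; not the product's actual schema.
type Category = "grow" | "maintenance" | "fix";

interface Commit {
  hash: string;
  category: Category;
  hours: number;
  introducedBug: boolean; // true if a later fix commit attributes a bug to this commit
}

interface InvestmentQuality {
  productive: number;
  maintenance: number;
  wasted: number;
}

function classify(commits: Commit[]): InvestmentQuality {
  const totals: InvestmentQuality = { productive: 0, maintenance: 0, wasted: 0 };
  for (const c of commits) {
    if (c.category === "fix") {
      totals.productive += c.hours; // fixing bugs is valuable work
    } else if (c.introducedBug) {
      totals.wasted += c.hours; // buggy origin: grow/maintenance time becomes wasted
    } else if (c.category === "maintenance") {
      totals.maintenance += c.hours;
    } else {
      totals.productive += c.hours; // grow
    }
  }
  return totals;
}
```

Note that the `introducedBug` check is what distinguishes this view from the standard model: the same grow commit flips from productive to wasted once a bug is attributed to it.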
Currently computed client-side from commit and bug attribution data. Ideal server-side endpoint:
```
POST /v1/organizations/{orgId}/investment-quality
Content-Type: application/json
```

Request (`groupBy` accepts `"repository_id"` or `"deliverer_email"`):

```json
{
  "startTime": "2025-01-01T00:00:00Z",
  "endTime": "2025-12-31T23:59:59Z",
  "bucketSize": "BUCKET_SIZE_MONTH",
  "groupBy": ["repository_id"]
}
```
Response:

```json
{
  "productivePct": 74,
  "maintenancePct": 18,
  "wastedPct": 8,
  "buckets": [
    {
      "bucketStart": "2025-01-01T00:00:00Z",
      "productive": 4.2,
      "maintenance": 1.8,
      "wasted": 0.6
    }
  ]
}
```

Latest analyzed commits from this developer.
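The top-level percentages summarize effort across all buckets, so a single example bucket need not reproduce them. A minimal client-side derivation, assuming the field names from the example payload (units of the per-bucket values are unspecified in the source):

```typescript
// Sketch: recompute headline percentages from response buckets.
// Field names mirror the example payload and are assumptions.
interface Bucket {
  bucketStart: string;
  productive: number;
  maintenance: number;
  wasted: number;
}

function summarize(buckets: Bucket[]) {
  let productive = 0;
  let maintenance = 0;
  let wasted = 0;
  for (const b of buckets) {
    productive += b.productive;
    maintenance += b.maintenance;
    wasted += b.wasted;
  }
  const total = productive + maintenance + wasted;
  // Guard against an empty range; round to whole percentages.
  const pct = (x: number) => (total === 0 ? 0 : Math.round((x / total) * 100));
  return {
    productivePct: pct(productive),
    maintenancePct: pct(maintenance),
    wastedPct: pct(wasted),
  };
}
```

Independent rounding means the three percentages can sum to 99 or 101; a server-side endpoint would need to pick a reconciliation rule.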
Commit activity distribution by hour and day of week. Shows when this developer is most active.

Developers who frequently work on the same files and symbols. A higher score means stronger code collaboration.

| Hash | Message | Date | Files | Effort |
|---|---|---|---|---|
| 9b47dead | This commit **refactors** the **telemetry settings** by removing the direct import of the OpenTelemetry `Tracer` API from `TelemetrySettings`. This change is part of an ongoing effort to **decouple OpenTelemetry from core functions and packages**, improving modularity. Instead of direct import, the tracer is now passed through the `integrations()` method within `TelemetrySettings`. This **maintenance** work ensures a cleaner separation of concerns and simplifies the core codebase's dependency on OTel. | Mar 31 | 2 | maint |
| c4f1bebc | This commit **refactors** the **telemetry test suite** within the `ai` package to improve organization and align with architectural goals. It consolidates all OpenTelemetry integration tests, previously scattered across various files for functions like `generateText` and `streamText`, into a new dedicated `open-telemetry-integration.test.ts` file. This **maintenance** effort also adds new tests for `rerank`, `embed`, and `embedMany` functions. The change supports the ongoing initiative to **decouple OpenTelemetry** from the core `ai` package, enhancing the maintainability and clarity of the telemetry testing infrastructure. | Mar 31 | 11 | maint |
| b56301cf | This commit **refactors** the **AI SDK's `generateObject` and `streamObject` functions** to **decouple OpenTelemetry (OTel) dependencies** from their core logic. OTel-specific tracing for structured outputs is now handled within the dedicated `OpenTelemetryIntegration`, which has been extended with new `onObjectStepStart` and `onObjectStepFinish` callbacks. This **refactoring** improves modularity and maintainability by centralizing telemetry concerns within the `packages/ai/src/telemetry` module. The change makes the core AI functions cleaner and more adaptable to different observability backends, providing more granular tracing for structured object generation. | Mar 31 | 11 | maint |
| bc67b4f6 | This commit introduces **experimental lifecycle callbacks** for the **`generateObject` and `streamObject` functions** within the **`@vercel/ai` package's structured output module**. It adds new event types such as `ObjectOnStartEvent`, `ObjectOnStepStartEvent`, `ObjectOnStepFinishEvent`, and `ObjectOnFinishEvent`, enabling developers to observe and react to various stages of the object generation process. This **new capability** provides enhanced observability and control over AI structured outputs, allowing for more granular interaction and debugging. Comprehensive tests have been added to validate the correct order and properties of these new events, ensuring reliable integration for downstream consumers. | Mar 30 | 8 | maint |
| f4cfccd1 | This commit **refactors** the **`rerank` function** within the **`@vercel/ai` package** to **decouple its OpenTelemetry integration**. It introduces generic `RerankStartEvent` and `RerankFinishEvent` types, expanding the `onStart` and `onFinish` callbacks in the `telemetry` module to handle these new events. All OpenTelemetry-specific logic for reranking is now moved from `rerank.ts` into `open-telemetry-integration.ts`, making the core `rerank` function agnostic to the telemetry provider. This **refactoring** significantly improves the **modularity and maintainability** of the `rerank` subsystem, allowing for easier testing and future integration with alternative telemetry solutions. | Mar 24 | 8 | grow |
| 1f509d4b | This commit **fixes** and **refactors** the `kind` parameter for custom content parts across the AI SDK, enforcing a strict `${string}.${string}` template that mandates a dot separator. This standardization impacts the **`@vercel/ai`**, **`@vercel/openai`**, and **`@vercel/provider`** packages, specifically within their `generate-text`, `prompt`, `ui-message-stream`, `ui`, and `language-model` functionalities. The **refactoring** ensures consistent data formatting, preventing potential issues with custom part identification. This change includes comprehensive **documentation updates** and **test adjustments** to reflect the new required format, ensuring all existing examples and internal processes adhere to the updated standard. | Mar 24 | 27 | maint |
| 118b9532 | This commit **refactors** the **`@ai` package's embedding and telemetry subsystems** to **decouple OpenTelemetry (OTel) integration from core embedding functions**. Direct OTel span creation has been removed from `embed.ts` and `embed-many.ts`, replaced by a new event-driven system. This introduces `EmbedStartEvent` and `EmbedFinishEvent` types, and updates the global telemetry integration to conditionally create OTel spans for embedding operations. This **new capability** provides a more modular and configurable approach to tracing embedding calls, improving the extensibility of the telemetry system. | Mar 23 | 10 | grow |
| caf1b6f9 | This commit introduces **experimental `onStart` and `onFinish` callbacks** for the **`rerank` function** within the **`ai` package**. This **new capability** provides a generic, type-safe mechanism to emit event data at the start and end of reranking operations, mirroring similar functionality in `generateText` and `streamText`. The primary goal is to facilitate the **decoupling of OpenTelemetry** and other observability integrations from the core `ai` package. This feature includes new interfaces for event types and updates to the `rerank` function's implementation, documentation, and tests. | Mar 20 | 8 | maint |
| 877bf12d | This commit introduces a significant **refactoring** within the **AI package** to simplify the representation of model attributes. It **flattens the `model` object** in event payloads and function parameters, directly exposing `provider` and `modelId` instead of nesting them. This change impacts **AI eventing** across `embed` and `generate-text` functionalities, requiring updates to event interfaces, core functions like `generateText` and `embedMany`, and **telemetry integrations** such as `OpenTelemetryIntegration`. Additionally, the core event definition file was renamed from `callback-events.ts` to `core-events.ts`, enhancing clarity and reducing data convolution for a more streamlined developer experience. | Mar 19 | 27 | maint |
| 17bed392 | This commit **fixes a pre-commit hook failure** by **narrowing the `lint-staged` glob pattern** within the `package.json` configuration. Previously, the `*` glob would attempt to run `ultracite fix` on all staged files, causing an error when only non-JavaScript/TypeScript files (such as `.md` changesets) were staged, as `ultracite` expects JS/TS targets. The updated configuration now specifically targets `*.js`, `*.jsx`, `*.ts`, and `*.tsx` files, ensuring `ultracite fix` only processes relevant code. This **prevents unnecessary build failures** during the commit process and significantly improves the **developer experience** by allowing smooth commits of various file types. | Mar 19 | 1 | – |
| d9a1e9a4 | This commit introduces **server-side compaction** support for the **OpenAI provider**, enabling more efficient context management within the Responses API. It implements the necessary API specification updates to include a `compaction` block in `providerMetadata`, carrying an `encrypted content id` for context management. This **new capability** primarily affects the `packages/openai` module by updating its language model implementation, conversion logic, and type definitions to handle compaction items. Extensive **examples** for `generate-text`, `stream-text`, and `useChat` are included, along with updated documentation, to demonstrate how users can leverage this feature. The change allows for improved performance and reduced token usage in long conversations by offloading context management to the OpenAI server. | Mar 19 | 25 | grow |
| ff9ce30b | This commit introduces **experimental `onStart` and `onFinish` callbacks** for the **`ai` package's `embed` and `embedMany` functions**, providing a **new capability** for enhanced observability. These callbacks allow external systems to subscribe to and receive event data at the start and completion of embedding operations, crucial for future **OpenTelemetry integration** and decoupling. The change includes new type-safe interfaces like `EmbedOnStartEvent` and `EmbedOnFinishEvent`, comprehensive test cases for telemetry and error handling, and updated documentation for `embed` and `embedMany` to reflect these new event listeners. This work is a significant step towards **decoupling telemetry concerns** from the core `ai` package, aligning `embed` functions with similar event emission patterns already present in `generateText` and `streamText`. | Mar 18 | 13 | maint |
| 776b6170 | This commit introduces a **new `custom` content type** across the **AI SDK**, enabling model providers to return arbitrary, non-standard content that doesn't fit existing categories. This **feature addition** updates core functions like `generateText` and `streamText`, prompt conversion, and UI message processing to correctly recognize, handle, stream, and display these new `CustomPart` objects. It involves extensive changes to **AI SDK core**, **UI message handling**, and **provider utility types**, ensuring the `custom` content flows seamlessly from the model to the user interface. This significantly enhances the **extensibility of the AI SDK**, allowing for more flexible integration with diverse AI model outputs, supported by comprehensive documentation and tests. While `xai` currently issues a warning for `custom` parts, the rest of the system is prepared for this new content. | Mar 18 | 39 | grow |
| 23fa1610 | This commit **fixes a security vulnerability** within the **`mcp` package** by changing the default `redirect` option for both `HttpMCPTransport` and `SseMCPTransport` from `'follow'` to `'error'`. This **breaking change** enhances security by preventing potential Server-Side Request Forgery (SSRF) by ensuring redirects are not automatically followed by default. The update includes comprehensive **documentation updates** for `createMCPClient` and a new entry in the migration guide, along with corresponding **test adjustments** to reflect the new secure default behavior. Users relying on the previous default will now need to explicitly configure `redirect: 'follow'` if that behavior is still desired. | Mar 17 | 9 | maint |
| 156cdf06 | This commit introduces **support for OpenAI's new `toolSearch` tool**, a **new capability** that allows models to perform search operations. It involves significant updates to the **`openai` package**, including new API types, schemas, and logic within `OpenAIResponsesLanguageModel` to handle `toolSearch` calls and outputs, as well as proper tool conversion. Comprehensive **examples** have been added for both hosted and client-executed `toolSearch` across `generateText` and `streamText` modes, alongside **documentation** and UI demonstrations. This feature enhances the model's ability to interact with external search functionalities, providing more dynamic and informed responses. | Mar 17 | 28 | maint |
| ebb02ea6 | This commit introduces **Anthropic's tool search capabilities**, specifically `toolSearchRegex_20251119` and `toolSearchBm25_20251119`, to the **Google Vertex AI provider**. This **new capability** allows users to leverage advanced information retrieval tools when interacting with Anthropic models via Vertex AI, after resolving a compatibility issue related to beta headers. The `vertexAnthropicTools` object in `packages/google-vertex/src/anthropic/google-vertex-anthropic-provider.ts` now exposes these tools. New examples have been added to demonstrate both `generateText` and `streamText` usage, significantly enhancing the model's ability to perform sophisticated tool-based searches. | Mar 17 | 8 | grow |
| 21d1ee39 | This commit **fixes a compatibility issue** within the **Anthropic integration** by preventing the `advanced-tool-use-2025-11-20` beta header from being sent with tool search requests. This header is no longer required by Anthropic's API for tool search and was causing errors with other providers, such as Google Vertex. The change ensures that **Anthropic tool search functionality** works correctly across different environments, improving the robustness of the AI SDK. This is a **bug fix** and **maintenance update** to align with current API specifications, specifically impacting the `anthropic-prepare-tools` module. | Mar 16 | 4 | maint |
| f05a40d5 | This commit introduces a **fix** and **enhancement** to gracefully handle the `strict: true` option for tools, specifically when used with the **Google Vertex Anthropic provider**. Previously, this configuration would lead to a runtime error, but now, the system will **emit a warning** to inform users about the incompatibility. This is achieved by adding a `supportsStrictTools` property to the **Anthropic** and **Google Vertex Anthropic providers**, with the latter explicitly disabling strict tool support. The change also updates the **Amazon Bedrock provider** and relevant **documentation** to reflect this behavior, improving the robustness and user guidance for tool usage across different AI models. | Mar 16 | 9 | grow |
| f6d21276 | This commit performs a **chore** to **update the AssemblyAI integration** within the project, migrating it from using v3 API specifications to the **latest v4 specifications**. Specifically, the **AssemblyAI package**'s `assemblyai-provider.ts` and `assemblyai-transcription-model.ts` files have been updated to implement the new v4 types and explicitly set their specification versions. This **maintenance** task ensures the AssemblyAI functionality remains current and compatible with the latest API, affecting how transcription models and providers interact with the service. | Mar 16 | 3 | maint |
| 8f3e1da1 | This commit **upgrades the `openai-compatible` package and its dependent provider integrations to utilize the V4 specifications**. This **refactoring** effort updates type imports, interface extensions, and model implementations across **language, embedding, image, and reranking models** for providers such as `baseten`, `cerebras`, `deepinfra`, `fireworks`, `moonshotai`, `togetherai`, and `vercel`. The change ensures these packages are aligned with the latest API standards, enhancing consistency and future compatibility within the ecosystem. | Mar 16 | 39 | maint |