Changelog


waitUntil is now available for Vercel Functions

You can now use waitUntil by importing @vercel/functions in your Vercel Functions, regardless of the framework or runtime you use.

The waitUntil() method enqueues an asynchronous task to be performed during the lifecycle of the request. It doesn't block the response, and the function stays alive until the enqueued task settles.

Use it for work that can happen after the response is sent, such as logging, sending analytics, or updating a cache.
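As a self-contained sketch of the pattern: in a real Vercel Function you would `import { waitUntil } from '@vercel/functions'`; here a local stand-in models the behavior so the example runs anywhere.

```typescript
// Local stand-in for waitUntil: tasks are enqueued without being awaited,
// so the response is not blocked; the platform then waits for all enqueued
// tasks before shutting the function down.
const pending: Promise<unknown>[] = [];

function waitUntil(task: Promise<unknown>): void {
  pending.push(task); // enqueue, don't await
}

const events: string[] = [];

function handler(): string {
  // Kick off post-response work (e.g. sending analytics) in the background.
  waitUntil(Promise.resolve().then(() => { events.push("analytics sent"); }));
  events.push("response sent");
  return "ok"; // returned before the background task finishes
}

// Models the function lifecycle: respond first, then drain pending tasks
// before shutdown.
async function run(): Promise<string[]> {
  const body = handler();
  await Promise.all(pending);
  return [body, ...events];
}
```

Running run() yields ["ok", "response sent", "analytics sent"]: the response is produced before the background task settles, but the task still completes before shutdown.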

The package is supported in Next.js (including Server Actions), Vercel CLI, and other frameworks, and can be used with the Node.js and Edge runtimes.

Learn more in the documentation.


Vercel Functions for Hobby can now run up to 60 seconds

Based on your feedback, Hobby customers can now run functions for up to 60 seconds.

Starting today, all new deployments can raise the maximum duration of functions on the free tier from 10 seconds to 60 seconds. If you need more than 60 seconds, you can upgrade to Pro for durations of up to 5 minutes.
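In Next.js with the App Router, for example, a function opts into the longer limit with the maxDuration route segment config (illustrative route path):

```typescript
// app/api/report/route.ts (hypothetical route)
// Allow this function to run for up to 60 seconds.
export const maxDuration = 60;

export async function GET(): Promise<Response> {
  // ...long-running work, e.g. generating a report...
  return new Response("done");
}
```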

Check out our documentation to learn more.


Recommend branch based feature flag overrides

You can now recommend feature flag overrides for specific branches, making it easier to share in-development work with your team.

The Vercel Toolbar will suggest flag overrides to team members working on the branch locally or when visiting a branch Preview Deployment. This extends the recently announced ability to view and override your application's feature flags from Vercel Toolbar, currently in beta.

As part of this change, we’ve improved the onboarding for setting up and integrating feature flags into the toolbar.

Learn more about the Vercel Toolbar and feature flags.


Access groups now generally available on Enterprise plans

Enterprise customers can now manage access to critical Vercel projects across many Vercel users more easily than ever with Access Groups.

Access Groups allow team administrators to create a mapping between team members and groups of Vercel projects. Users added to an Access Group will automatically be assigned access to the Projects connected to that Access Group, and will be given the default role of that group, making onboarding easier and faster than ever for new Vercel Team members.

For customers who use a third-party Identity Provider, such as Okta, Access Groups can automatically sync with their provider, making it faster to start importing users to Vercel without creating manual user group mappings (Vercel is SCIM compliant).

For example, you can have a Marketing Engineering Access Group, which has a default project role of "Developer". When a new member is added to the Marketing Engineering group, they will automatically be assigned the Developer role, and access to all Projects assigned to that group.
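The mapping can be pictured with a small model (a toy sketch with made-up group and project names; the real feature is configured in the Vercel dashboard or synced via your identity provider):

```typescript
// Toy model of Access Groups: a group connects members to a set of projects
// and carries a default role. Names below are illustrative only.
type Role = "Developer" | "Viewer" | "Admin";

interface AccessGroup {
  name: string;
  defaultRole: Role;
  projects: Set<string>;
  members: Set<string>;
}

const marketingEng: AccessGroup = {
  name: "Marketing Engineering",
  defaultRole: "Developer",
  projects: new Set(["marketing-site", "landing-pages"]),
  members: new Set(),
};

// Adding a member grants the group's default role on every connected project.
function addMember(group: AccessGroup, user: string): Array<[string, Role]> {
  group.members.add(user);
  return [...group.projects].map((p): [string, Role] => [p, group.defaultRole]);
}
```

In this sketch, adding a new hire to the Marketing Engineering group yields a Developer grant on each of the group's projects, mirroring the onboarding flow described above.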

This builds on our advanced access controls, like project level access controls and deployment protection. Learn more about Access Groups or contact us for a demo of our access security features.


Python 3.12 and Ruby 3.3 are now available

Starting today, new Python Builds and Functions will use version 3.12 and new Ruby Builds and Functions will use version 3.3.

If you need to continue using Python 3.9 or Ruby 3.2, ensure you have 18.x selected for the Node.js Version in your project settings to use the older build image.

For Python 3.9, ensure your Pipfile and corresponding Pipfile.lock have python_version set to 3.9 exactly. Similarly, for Ruby 3.2, make sure ruby "~> 3.2.x" is defined in the Gemfile.
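Concretely, the pins look like the following fragments (file contents beyond the version lines are illustrative):

```toml
# Pipfile
[requires]
python_version = "3.9"
```

```ruby
# Gemfile
source "https://rubygems.org"
ruby "~> 3.2.x"
```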

Check out the documentation to learn more about our supported runtimes.


Accounts can now have multiple email addresses

You can now add multiple email addresses to your Vercel account.

For example, both your personal email and work email can be attached to the same Vercel account. Any verified email attached to your account can be used to log in. You can mark one email as "primary", which makes it the destination for account and project notifications.

Learn more in our documentation.

Faster build times with optimized uploads

We've optimized our build process to reduce upload times by 15% on average for all customers.

For customers with large builds (10,000 or more outputs), upload times have decreased by 50%, saving some customers up to 5 minutes per build.

Learn more about builds in our documentation.

Vercel Terraform Provider v1.9

The Vercel Terraform Provider allows you to create, manage, and update your Vercel projects, configuration, and settings through infrastructure as code.

You can now control significantly more Vercel resources through Terraform.

Learn how to get started with the Terraform provider for Vercel. If you already have Terraform set up, upgrade by running:

Bash
terraform init -upgrade
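For instance, a minimal configuration using the provider's vercel_project resource might look like this (the project name is illustrative):

```hcl
# main.tf
terraform {
  required_providers {
    vercel = {
      source  = "vercel/vercel"
      version = "~> 1.9"
    }
  }
}

resource "vercel_project" "example" {
  name      = "my-example-project"
  framework = "nextjs"
}
```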


Faster defaults for Vercel Function CPU and memory

The default CPU for Vercel Functions will change from Basic (0.6 vCPU/1GB Memory) to Standard (1 vCPU/1.7GB Memory) for new projects created after May 6th, 2024. Existing projects will remain unchanged unless manually updated.

This change helps ensure consistent function performance and faster startup times. Depending on your function code size, this may reduce cold starts by a few hundred milliseconds.

While increasing the function CPU can increase costs for the same duration, it can also make functions execute faster. If functions execute faster, you incur less overall function duration usage. This is especially important if your function runs CPU-intensive tasks.
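A back-of-the-envelope illustration, assuming duration usage is roughly proportional to memory times execution time (the durations below are made up, not measured):

```typescript
// Function duration usage modeled as memory (GB) multiplied by execution
// time (s). If Standard's extra CPU makes a CPU-bound function finish
// faster, an invocation can consume less total usage despite more memory.
function gbSeconds(memoryGb: number, durationMs: number): number {
  return memoryGb * (durationMs / 1000);
}

// Hypothetical CPU-bound invocation that halves in duration on Standard.
const basicUsage = gbSeconds(1.0, 500);    // Basic: 1 GB for 500 ms = 0.5 GB-s
const standardUsage = gbSeconds(1.7, 250); // Standard: 1.7 GB for 250 ms = 0.425 GB-s
```

Under these assumed durations, the Standard invocation consumes about 15% less duration usage than Basic, despite the higher memory.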

This change applies to all paid plan customers (Pro and Enterprise); no action is required.

Check out our documentation to learn more.


Improved infrastructure pricing is now active for new customers

Earlier this month, we announced our improved infrastructure pricing, which is active for new customers starting today.

Billing for existing customers begins between June 25 and July 24. For more details, please reference the email with next steps sent to your account. Existing Enterprise contracts are unaffected.

Our previous combined metrics (bandwidth and functions) are now more granular and have reduced base prices. These new metrics can be viewed and optimized from our improved Usage page.

These pricing improvements build on recent platform features to help automatically prevent runaway spend, including hard spend limits, recursion protection, improved function defaults, Attack Challenge Mode, and more.
