Google debuts Cloud Run jobs for containerized, scripted tasks
During a developer keynote at Google I/O 2022, Google unveiled Cloud Run jobs, an extension of Google Cloud's service for developing and deploying containerized apps using languages including Go, Python and Java. Cloud Run jobs are designed for containers that run to completion and don't serve requests -- such as data processing and administrative tasks -- and for cases where multiple copies of a container must run in parallel.
Cloud Run launched in 2019, adding to Google Cloud's then-rapidly-growing serverless compute stack. As demand for serverless climbs, expansions like Cloud Run jobs appear to be an attempt to fend off rivals like Microsoft Azure and Amazon Web Services.
Available in preview starting today, Cloud Run jobs can be used to run a script to perform database migrations or other operational tasks, like sending invoices every month. Relative to other platforms that support long-running jobs, Cloud Run jobs start quickly after creation, Google claims, with simple containers starting in as little as 10 seconds.
To use Cloud Run jobs, developers create a job, which encapsulates all the configuration needed to run it, including the container image, region and environment variables. Then they either run the job manually or set it to run on a schedule; each run creates a new execution of the job.
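The article doesn't show the commands involved, but the flow it describes -- create a job, run it on demand, then schedule it -- might look roughly like the following gcloud sketch. All names here (`my-job`, the image path, the project ID, the service account) are placeholders, and the exact command surface during the preview (under `gcloud beta`) may differ:

```shell
# Hypothetical sketch; job name, image, project and region are placeholders.

# Create a job, bundling the container image, region and env vars:
gcloud beta run jobs create my-job \
  --image=gcr.io/my-project/invoice-mailer \
  --region=us-central1 \
  --set-env-vars=BILLING_MODE=monthly

# Run the job manually; each invocation creates a new execution:
gcloud beta run jobs execute my-job --region=us-central1

# To run it on a schedule (e.g. monthly invoices), one documented pattern
# at the time was a Cloud Scheduler HTTP job hitting the job's :run endpoint:
gcloud scheduler jobs create http monthly-invoices \
  --schedule="0 9 1 * *" \
  --http-method=POST \
  --uri="https://us-central1-run.googleapis.com/apis/run.googleapis.com/v1/namespaces/my-project/jobs/my-job:run" \
  --oauth-service-account-email=scheduler-sa@my-project.iam.gserviceaccount.com
```

The cron expression `0 9 1 * *` would fire at 09:00 on the first of each month, matching the invoicing example in the article.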
During the preview, Cloud Run jobs supports up to 50 concurrent executions per project per region, from the same or different jobs. Users can view existing jobs, start executions and monitor execution status from Cloud Console's Cloud Run Jobs page; Cloud Console doesn't currently support creating new jobs.
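For the monitoring side, the same kind of inspection should also be possible from the CLI. A hedged sketch, again with placeholder names and assuming the preview-era `gcloud beta run jobs` command group:

```shell
# Hypothetical sketch; names and flags are assumptions, not from the article.

# List the jobs defined in a region:
gcloud beta run jobs list --region=us-central1

# List executions of a specific job to check their status:
gcloud beta run jobs executions list --job=my-job --region=us-central1

# Inspect one execution in detail (task counts, completion state):
gcloud beta run jobs executions describe my-job-abc12 --region=us-central1
```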
Cloud Run jobs arrives alongside an updated Firebase, Google's popular back-end-as-a-service platform, and AlloyDB, a new fully managed PostgreSQL-compatible database service. Arguably the more interesting of the two, AlloyDB features -- as my colleague Frederic Lardinois writes -- a custom machine learning-based caching service that learns a customer's access patterns and then converts Postgres' row format into an in-memory columnar format that can be analyzed significantly faster.