Reference

Generated documentation for Weights & Biases APIs

Release notes

Learn about W&B releases, including new features, performance improvements, and bug fixes.

Release policies and processes

Learn more about W&B releases, including frequency, support policies, and end of life.

Python Library

Train, fine-tune, and manage models from experimentation to production.

Command Line Interface

Log in, run jobs, execute sweeps, and more using shell commands.

JavaScript Library

A beta JavaScript/TypeScript client to track metrics from your Node server.

Query Panels

A beta query language to select and aggregate data.

1 - Release Notes

This section includes release notes for supported W&B Server releases. For releases that are no longer supported, refer to Archived releases.

1.1 - 0.69.x

May 28, 2025

W&B 0.69 focuses on making the workspace more intuitive, collaborative, and efficient. Clearer visualizations and faster artifact downloads streamline how you interact with your data, so you can gain and share insights more quickly. Updates to Weave improve team workflows and evaluation tracking. A range of quality-of-life fixes tidy up the overall user experience.

This release also marks the end of life for v0.54 and older, which are now officially unsupported.

Support and end of life

  • W&B Server v0.54 and below have reached end of life as of May 27, 2025.
  • W&B Server v0.56 is scheduled to reach end of life in July 2025.

A W&B Server release is supported for 12 months from its initial release date. As a reminder, customers using Self-managed are responsible for upgrading to a supported release in time to maintain support.

Refer to Release policies and processes. For assistance or questions, contact support.

Upgrading

To upgrade to W&B v0.69.x, you must use v0.31.4+ of the operator-wandb Helm chart. Otherwise, after the upgrade, the weave-cache-clear container can fail to start. Ensure that your deployment uses these values:

chart:
  url: https://charts.wandb.ai
  name: operator-wandb
  version: 0.31.4

If you have questions or are experiencing issues with an upgrade, contact support.

Features

  • You can now set a custom display name for a run directly in the workspace. Customized run names show up in all plots and tables but only in your workspace, with no impact on your teammates’ views. This provides a clearer and cleaner view in your workspace, with no more labels like *...v6-final-restart...* in every legend and plot.
  • When filtering or grouping runs, colors can sometimes overlap and become indistinct. The run selector’s new Randomize Colors option reassigns random colors from the default palette to your current run selection or groups, helping to make the colors more distinguishable.
  • In line plots, you can now use Cmd+Click on a line to open a single-run view in a new tab.
  • Video media panels now provide more playback controls to play, pause, seek, view full screen, and adjust playback speed.
  • Settings for all types of media panels have been reorganized and improved.
  • You can now customize the point and background colors for point cloud panels.
  • Team-level and organization-level service accounts can now interact with Registry.
  • Improved Exponentially-weighted Moving Average (EMA) smoothing provides more reliable smoothed lines when operating on complete, unbinned data. In most cases, smoothing is handled at the back end for improved performance. This feature was in private preview in v0.68.x.
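
For reference, the core of EMA smoothing can be sketched in a few lines of Python. This is a generic, debias-corrected EMA over a metric series, not the exact server-side implementation:

```python
def ema_smooth(values, weight=0.9):
    """Debias-corrected exponentially weighted moving average.

    weight is the smoothing factor in [0, 1): higher means smoother.
    """
    smoothed = []
    last = 0.0
    for i, v in enumerate(values, start=1):
        last = weight * last + (1 - weight) * v
        # Debias correction compensates for the zero initialization.
        smoothed.append(last / (1 - weight ** i))
    return smoothed

print(ema_smooth([1.0, 1.0, 1.0], weight=0.5))  # [1.0, 1.0, 1.0]
```

A constant series stays constant after debiasing, which is the property that distinguishes this form from a naive EMA started at zero.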

Private preview

Private preview features are available by invitation only. To request enrollment in a private preview, contact support or your AISE.

  • You can now color all of your runs based on a secondary metric, such as loss or custom efficiency metrics. This creates a clear gradient color scale across your runs in all plots, so you can spot patterns faster. Watch a video demo.
  • Personal workspace templates allow you to save core line plot settings and automatically reapply them in new views. These settings include x-axis key, smoothing algorithm, smoothing factor, max number of lines, whether to use the run selector’s grouping, and which aggregation to apply.

Weave

  • Saved views simplify team collaboration and allow you to persist filter and column settings.
  • PDFs and generic files are now supported.
  • The new EvaluationLogger API provides flexible imperative-style evaluation logging.
  • You can now import human annotations into Weave datasets.
  • Playground now supports saved configurations and prompts.
  • Decorators are now supported in TypeScript.
  • Added support for tracing generator functions.
  • The new dataset.add_rows helper improves the efficiency of appending to an existing dataset.
  • To help you understand your usage, trace and object sizes are now shown through the UI.

Performance

  • With wandb SDK v0.19.11, artifacts now download 3-5x faster on average. For example, an artifact that previously downloaded at around 100 MB/sec may now download at 450 MB/sec or faster. Actual download speeds vary based on factors such as your network and storage infrastructure.
  • Improved caching on Project and User Settings pages.

Fixes

  • Improved the startup process for the weave-cache-clear container to ensure compatibility with Python virtual environments.
  • Added options for denser display of console logs.
  • Workspace loading screens are now more informative.
  • When adding a panel from a workspace to a report, the current project’s reports are now shown first in the destination report list.
  • Fixed many cases where y-axes would over-round to a degree that caused duplicate values to display.
  • Fixed confusing behavior when entering invalid smoothing parameters.
  • Removed the Partial Media warning from media panels. This does not change the behavior of the media panels.
  • When adding a run filter based on tags, the filter is now selected by default, as when filtering by other fields.
  • Removed the green bell icon that could appear on active runs in the run selector.
  • Removed the System page for individual runs.
  • The project description field now respects new lines.
  • Fixed URLs for legacy model registry collections.
  • Fixed a bug where the Netron viewer did not expand to fill all available space on the page.
  • When you click Delete on a project, the project name now displays in the confirmation modal.

1.2 - 0.68.x

April 29, 2025

W&B Server v0.68 includes enhancements to various types of panels and visualizations, security improvements for Registry, Weave, and service accounts, performance improvements when forking and rewinding runs, and more.

The latest patch is v0.68.2.

Refer to Patches.

Features

  • Release notes for W&B Server are now published in the W&B documentation in addition to on GitHub. Subscribe using RSS.
  • Registry admins can define and assign protected aliases to represent key stages of your development pipeline. A protected alias can be assigned only by a registry admin. W&B blocks other users from adding or removing protected aliases from versions in a registry using the API or UI.
  • You can now filter console logs based on a run’s x_label value. During distributed training, this optional parameter tracks the node that logged the run.
  • You can now move runs between Groups, one by one or in bulk. Also, you can now create new Groups after the initial logging time.
  • Line plots now support synchronized zooming mode, where zooming to a given range on one plot automatically zooms into the same range on all other line plots with a common x-axis. Turn this on in the workspace display settings for line plots.
  • Line plots now support formatting custom metrics as timestamps. This is useful when synchronizing or uploading runs from a different system.
  • You can now slide through media panels using non-_step fields such as epoch or train/global_step (or anything else).
  • In Tables and plots in Query Panels that use runs or runs.history expressions, a step slider allows you to step through the progress on your metrics, text, or media through the course of your runs. The slider supports stepping through non-_step metrics.
  • You can now customize bar chart labels using a font size control.

Private preview

Private preview features are available by invitation only. To request enrollment in a private preview, contact support or your AISE.

  • Personal workspace templates allow you to save your workspace setup so it is automatically applied to your new projects. Initially, you can configure certain line plot settings such as the default X axis metric, smoothing algorithm, and smoothing factor.
  • Improved Exponentially-weighted Moving Average (EMA) smoothing provides more reliable smoothed lines when operating on complete, unbinned data. In most cases, smoothing is handled at the back end for improved performance.

Weave

  • Chat with fine-tuned models from within your W&B instance. Playground is now supported in Dedicated Cloud. Playground is a chat interface for comparing different LLMs on historical traces. Admins can add API keys to different model providers or hook up custom hosted LLM providers so your team can interact with them from within Weave.
  • OpenTelemetry support. You can now log traces via OpenTelemetry (OTel). Learn more.
  • Weave tracing has new framework integrations: CrewAI, OpenAI’s Agent SDK, DSPy 2.x and Google’s genai Python SDK.
  • Playground supports new OpenAI models: GPT‑4.1, GPT‑4.1 mini, and GPT‑4.1 nano.
  • Build labeled datasets directly from traces, with your annotations automatically converted into dataset columns. Learn more.

Security

  • Registry admins can now designate a service account in a registry as either a Registry Admin or a Member. Previously, the service account’s role was always Registry Admin. Learn more.

Performance

  • Improved the performance of many workspace interactions, particularly in large workspaces. For example, expanding sections and using the run selector are significantly more responsive.

  • Improved Fork and Rewind Performance.

    Forking a run creates a new run that uses the same configuration as an existing run. Changes to the forked run do not affect the parent run, and vice versa. A pointer is maintained between the forked run and the parent. Rewinding a run lets you log new data from that point in time without losing the existing data.

    In projects with many nested forks, forking new runs is now much more efficient due to improvements in caching.

Fixes

  • Fixed a bug that could prevent an organization service account from being added to new teams.
  • Fixed a bug that could cause hover marks to be missing for grouped lines.
  • Fixed a bug that could include invalid project names in the Import dropdown of a Report panel.
  • Fixed a display bug in the alignment of filters in the run selector.
  • Fixed a page crash when adding a timestamp Within Last filter.
  • Fixed a bug that could prevent the X-axis from being set to Wall Time in global line plot settings.
  • Fixed a bug that could prevent image captions from appearing when they are logged to a Table.
  • Fixed a bug that could prevent sparse metrics from showing up in panels.
  • In Run Overview pages, the Description field is now named Notes.

Patches

0.68.1

May 2, 2025

  • Fixed a bug introduced in v0.68.0 that could prevent media from loading in media panels.

0.68.2

May 7, 2025

  • Fixed a bug introduced in v0.68.0 that could cause background jobs to crash or run inconsistently. After upgrading to v0.68.2, affected background jobs will recover automatically. If you experience issues with background jobs after upgrading, contact Support.
  • Fixed a long-standing UI bug where typing an invalid regular expression into the W&B App search field could crash the app. Now if you type an invalid regular expression, it is treated as a simple search string, and you can update the search field and try again.
  • Fixed a bug where the SMTP port was set to 25 instead of the port specified in GORILLA_EMAIL_SINK.
  • Fixed a bug where inviting a user to a team could fail with the misleading error You have no available seats.
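
The regex-search fix above follows a common pattern: try to compile the pattern, and degrade to a plain substring match on failure. A rough Python sketch of the behavior (the actual frontend code is TypeScript and is not shown here):

```python
import re

def match_runs(query, run_names):
    """Return run names matching query as a regex; fall back to a
    substring search if the regex is invalid."""
    try:
        pattern = re.compile(query)
        return [n for n in run_names if pattern.search(n)]
    except re.error:
        # Invalid regex (e.g. unbalanced bracket): treat as literal text.
        return [n for n in run_names if query in n]

runs = ["baseline", "run[1", "run-2"]
print(match_runs("run[1", runs))  # invalid regex -> substring match: ['run[1']
```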

1.3 - 0.67.x

March 28, 2025

Features

  • In Reports, you can now give a run a custom display name per panel grid. This allows you to replace the run’s (often long and opaque) training-time name with one that is more meaningful to your audience. The report updates the name in all panel grids, helping you to explain your hard-won experimental insights to your colleagues in a concise and readable way. The original run name remains intact in the project, so doing this won’t disrupt your collaborators.
  • When you expand a panel in the workspace, it now opens in full screen mode with more space. In this view, line plots now render with more granular detail, using up to 10,000 bins. The run selector appears next to the panel, letting you easily toggle, group, or filter runs in context.
  • From any panel, you can now copy a unique URL that links directly to that panel’s full screen view. This makes it even easier to share a link to dig into interesting or pathological patterns in your plots.
  • Run Comparer is a powerful tool you can use to compare the configurations and key metrics of important runs alongside their loss curves. Run Comparer has been updated:
    • Faster to add a Run Comparer panel, as an expanded option in Add Panels.
    • By default, a Run Comparer panel takes up more space, so you can see the values right away.
    • Improved readability and legibility of a Run Comparer panel. You can use new controls to quickly change row and column sizes so you can read long or nested values.
    • You can copy any value in the panel to your clipboard with a single click.
    • You can search keys with regular expressions to quickly find exactly the subset of metrics you want to compare across. Your search history is saved to help you iterate efficiently between views.
    • Run Comparer is now more reliable at scale, and handles larger workspaces more efficiently, reducing the likelihood of poor performance or a crashed panel.
  • Segmentation mask controls have been updated:
    • You can now toggle each mask type on or off in bulk, or toggle all masks or all images on or off.
    • You can now change each class’s assigned color, helping to avoid confusion if multiple classes use the same color.
  • When you open a media panel in full screen mode, you can now use the left or right arrows on your keyboard to step through the images, without first clicking on the step slider.
  • Media panels now color run names, matching the run selector. This makes it easier to associate a run’s media values with related metrics and plots.
  • In the run selector, you can now filter runs by whether they have a certain media key.
  • You can now move runs between groups in the W&B App UI, and you can create new groups after the run is logged.
  • Automations can now be edited in the UI.
  • An automation can now notify a Slack channel for artifact events. When creating an automation, select “Slack notification” for the Action type.
  • Registry now supports global search by default, allowing you to search across all registries by registry name, collection name, alias, or tag.
  • In Tables and Query panels that use the runs expression, you can use the new Runs History step slider and drop-down controls to view a table of metrics at each step of a run.
  • Playground in Weave supports new models: OpenAI’s gpt-4.5-preview and Deepseek’s deepseek-chat and deepseek-reasoner.
  • Weave tracing has two new agent framework integrations: CrewAI and OpenAI’s Agent SDK.
  • In the Weave UI, you can now build Datasets from traces. Learn more: https://weave-docs.wandb.ai/guides/core-types/datasets#create-edit-and-delete-a-dataset-in-the-ui
  • The Weave Python SDK now provides a way to filter the inputs and outputs of your Weave data so that sensitive data does not leave your network perimeter, including configurable redaction of sensitive fields. Learn more: https://weave-docs.wandb.ai/guides/tracking/redact-pii/
  • To streamline your experience, the System tab in the individual run workspace view will be removed in an upcoming release. View full information about system metrics in the System section of the workspace. For questions, contact support@wandb.com.

Security

  • golang crypto has been upgraded to v0.36.0.
  • golang oauth2 has been upgraded to v0.28.0.
  • In Weave, pyarrow is now pinned to v17.0.0.

Performance

  • Frontend updates significantly reduce workspace reload times by storing essential data in the browser cache across visits. The update optimizes loading of saved views, metric names, the run selector, run counts, W&B’s configuration details, and the recomputation of workspace views.
  • Registry overview pages now load significantly faster.
  • Improved the performance of selecting metrics for the X, Y, or Z values in a scatter plot in a workspace with thousands of runs or hundreds of metrics.
  • Performance improvements to Weave evaluation logging.

Fixes

  • Fixed a bug in Reports where following a link to a section in the report would not open to that section.
  • Improved how Gaussian smoothing handles index reflection, matching SciPy’s default “reflect” mode.
  • A Report comment link sent via email now opens directly to the comment.
  • Fixed a bug that could crash a workspace if a sweep takes longer than 2 billion compute seconds by changing the variable type for sweep compute seconds to int64 rather than int32.
  • Fixed display bugs that could occur when a report included multiple run sets.
  • Fixed a bug where panels Quick Added to an alphabetically sorted section were sorted incorrectly.
  • Fixed a bug that generated malformed user invitation links.
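
For context on the Gaussian smoothing fix above: SciPy’s default “reflect” mode maps out-of-range sample indices back into the signal by mirroring about the array edge. An illustrative sketch of that index reflection:

```python
def reflect_index(i, n):
    """Map an out-of-range index into [0, n) using SciPy-style
    'reflect' boundaries: (d c b a | a b c d | d c b a)."""
    while i < 0 or i >= n:
        if i < 0:
            i = -1 - i
        if i >= n:
            i = 2 * n - 1 - i
    return i

# Indices -2..5 over a 4-sample signal [a, b, c, d]:
print([reflect_index(i, 4) for i in range(-2, 6)])  # [1, 0, 0, 1, 2, 3, 3, 2]
```

Note that “reflect” duplicates the edge sample, unlike SciPy’s “mirror” mode, which reflects about the edge sample itself.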

1.4 - 0.66.x

March 06, 2025

Features

  • In tables and query panels, columns you derive from other columns now persist, so you can use them for filtering or in query panel plots.

Security

  • Limited the maximum depth for a GraphQL document to 20.
  • Upgraded pyarrow to v17.0.0.
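
For context, the depth of a GraphQL document is the nesting level of its selection sets. A toy Python illustration of depth counting (the server operates on the real GraphQL AST, not on dicts):

```python
def query_depth(selection):
    """Depth of a nested selection set, modeled as a dict of
    field -> sub-selection (empty dict for leaf fields)."""
    if not selection:
        return 0
    return 1 + max(query_depth(sub) for sub in selection.values())

# project -> runs -> summaryMetrics is 3 levels deep.
q = {"project": {"runs": {"summaryMetrics": {}}}}
print(query_depth(q))  # 3

MAX_DEPTH = 20  # documents deeper than this are rejected
assert query_depth(q) <= MAX_DEPTH
```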

1.5 - 0.65.x

January 30, 2025

Features

  • From a registry’s Settings, you can now update the owner to a different user with the Admin role. Select Owner from the user’s Role menu.
  • You can now move a run to a different group in the same project. Hover over a run in the run list, click the three-vertical-dots menu, and choose Move to another group.
  • You can now configure whether the Log Scale setting for line plots is enabled by default at the level of the workspace or section.
    • To configure the behavior for a workspace, click the action ... menu for the workspace, click Line plots, then toggle Log scale for the X or Y axis.
    • To configure the behavior for a section, click the gear icon for the section, then toggle Log scale for the X or Y axis.

1.6 - 0.63.x

December 10, 2024

Features

Weave is now generally available (GA) in Dedicated Cloud on AWS. Reach out to your W&B team if you are looking to build Generative AI apps with confidence and put them into production.

Image showing the Weave UI

The release includes the following additional updates:

  • W&B Models now seamlessly integrates with Azure public cloud. You can now create a Dedicated Cloud instance in an Azure region directly from your Azure subscription and manage it as an Azure ISV resource. This integration is in private preview.
  • Enable automations at the Registry level to monitor changes and events across all collections in the registry and trigger actions accordingly. This eliminates the need to configure separate webhooks and automations for individual collections.
  • You can now assign an x_label, such as node-0, in the run settings object to distinguish logs and metrics by label (for example, by node) in distributed runs. This enables grouping system metrics and console logs by label for visualization in the workspace.
  • Coming soon in a patch release this week: the ability to use organization-level service accounts to automate your W&B workloads across all teams in your instance. You can still use existing team-level service accounts if you want more control over the access scope of a service account.
    • Allow org-level service accounts to interact with Registry. Such service accounts can be invited to a registry using the invite modal and are displayed in the members table along with respective organization roles.
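
The x_label grouping mentioned above can be pictured as bucketing console lines by node label. A toy sketch (illustrative only; the real grouping happens in the W&B backend and UI):

```python
from collections import defaultdict

def group_console_logs(lines):
    """Group (label, message) console lines by node label, as the UI
    does when runs set x_label (e.g. 'node-0') in distributed training."""
    grouped = defaultdict(list)
    for label, message in lines:
        grouped[label].append(message)
    return dict(grouped)

logs = [("node-0", "step 1"), ("node-1", "step 1"), ("node-0", "step 2")]
print(group_console_logs(logs))
# {'node-0': ['step 1', 'step 2'], 'node-1': ['step 1']}
```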

Fixes

  • Fixed an issue where users with custom roles that include the Create Artifact permission could not log artifacts to a project.
  • Fixed the issue with metadata logging for files in instances that have subpath support configured for BYOB.
  • Webhook deletion is now blocked if the webhook is used by organization registry automations.

1.7 - Archived Releases

Archived releases have reached end of life and are no longer supported. A major release and its patches are supported for six months from the initial release date. Release notes for archived releases are provided for historical purposes. For supported releases, refer to Releases.

1.7.1 -

This release is no longer supported. A major release and its patches are supported for six months from the initial release date.

Customers using Self-managed are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.

1.7.2 - 0.61.0

October 17, 2024

Features

This is a mini-feature and patch release, delivered on a different schedule from the monthly W&B Server major releases.

  • Organization admins can now configure Models seats and access control for both Models & Weave seamlessly from their organization dashboard. This change allows for efficient user management when Weave is enabled for a Dedicated Cloud or Self-managed instance.
    • Weave pricing is consumption-based rather than based on number of seats used. Seat management only applies to the Models product.
  • You can now configure access roles at the project level for team-scoped and restricted-scoped projects. This allows assigning a user different access roles in different projects within the same team, adding another strong control to help conform to enterprise governance needs.

Fixes

  • Fixed an issue where underlying database schema changes as part of release upgrades could time out during platform startup.
  • Added more performance improvements to the underlying parquet store service, further improving chart loading times. The parquet store service is available only on Dedicated Cloud, and on Self-managed instances based on the W&B kubernetes operator.
  • Addressed a high CPU utilization issue in the parquet store service, making efficient chart loading more reliable.

1.7.3 - 0.60.0

September 26, 2024

Features

  • Final updates for compliance with success criterion 1.1.1 of the Web Content Accessibility Guidelines (WCAG) 2.2, Level AA.
  • W&B can now disable auto-version-upgrade for customer-managed instances using the W&B kubernetes operator. You can request this from your W&B team.
    • Note that W&B requires all instances to upgrade periodically to comply with the 6-month end-of-life period for each version. W&B does not support versions older than 6 months.

Fixes

  • Fixed a bug to allow instance admins on Dedicated Cloud and Customer-managed instances to access workspaces in personal entities.
  • SCIM Groups and Users GET endpoints now filter out service accounts from the responses. Only non-service-account users are returned by those endpoints.
  • Fixed a user management bug by removing the ability of team admins to delete a user from the overall instance while removing them from a team. Instance or organization admins are responsible for deleting a user from the instance or organization.

Performance improvements

  • Reduced the latency when adding a panel by up to 90% in workspaces with many metrics.
  • Improved the reliability and performance of parquet exports to blob storage when runs are resumed often.
    • Runs export to blob storage in parquet format is available on Dedicated Cloud and on Customer-managed instances that are enabled using the W&B kubernetes operator.

1.7.4 - 0.58.1

September 04, 2024

Features

  • W&B now supports sub-paths for the Secure storage connector (bring your own bucket) capability. You can provide a sub-path when configuring a bucket at the instance or team level. This is available only for new bucket configurations, not for previously configured buckets.
  • W&B-managed storage on newer Dedicated Cloud instances in GCP & Azure will by default be encrypted with W&B managed cloud-native keys. This is already available on AWS instances. Each instance storage is encrypted with a key unique to the instance. Until now, all instances on GCP & Azure relied on default cloud provider-managed encryption keys.
  • Fields in the run config and summary are now copyable on click.
  • If you’re using W&B kubernetes operator for a customer-managed instance, you can now optionally use a custom CA for the controller manager.
  • We’ve modified the W&B kubernetes operator to run in a non-root context by default, aligning with OpenShift’s Security Context Constraints (SCCs). This change ensures smoother deployment of customer-managed instances on OpenShift by adhering to its security policies.

Fixes

  • Exporting panels from a workspace to a report now correctly respects the panel search regex.
  • Fixed an issue where setting GORILLA_DISABLE_PERSONAL_ENTITY to true was not disabling users from creating projects and writing to existing projects in their personal entities.

Performance improvements

  • We have significantly improved performance and stability for experiments with 100k+ logged points. If you have a customer-managed instance, this is available when the deployment is managed using the W&B kubernetes operator.
  • Fixed issue where saving changes in large workspaces would be very slow or fail.
  • Improved latency of opening workspace sections in large workspaces.

1.7.5 - 0.57.2

July 24, 2024

Features

You can now use JWTs (JSON Web Tokens) to access your W&B instance from the wandb SDK or CLI, using the identity federation capability. The feature is in preview. Refer to Identity federation and reach out to your W&B team for any questions.

The 0.57.2 release also includes these capabilities:

  • Improvements to the new Add to reports drawer for exporting workspace panels into Reports.
  • Artifacts metadata filtering in the artifact project browser.
  • Pass in artifact metadata in webhook payload via ${artifact_metadata.KEY}.
  • Added GPU memory usage panels to the RunSystemMetrics component, enhancing GPU metrics visualization for runs in the app frontend.
  • Mobile users now enjoy a much smoother, more intuitive Workspace experience.
  • If you’re using W&B Dedicated Cloud on GCP or Azure, you can now enable private connectivity for your instance, thus ensuring that all traffic from your AI workloads and optionally browser clients only transit the cloud provider private network. Refer to Private connectivity and reach out to your W&B team for any questions.
  • Team-level service accounts are now shown separately in a new tab in the team settings view. The service accounts are not listed in the Members tab anymore. Also, the API key is now hidden and can only be copied by team admins.
  • Dedicated Cloud is now available in GCP’s Seoul region.
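
The ${artifact_metadata.KEY} webhook variable mentioned above is simple template substitution over the artifact’s metadata. An illustrative sketch (W&B performs this server-side; this is not the actual implementation):

```python
import re

def render_payload(template, artifact_metadata):
    """Replace ${artifact_metadata.KEY} placeholders with values from
    the artifact's metadata dict (illustrative, not W&B's server code)."""
    def sub(match):
        return str(artifact_metadata.get(match.group(1), ""))
    return re.sub(r"\$\{artifact_metadata\.(\w+)\}", sub, template)

payload = '{"accuracy": "${artifact_metadata.acc}"}'
print(render_payload(payload, {"acc": 0.97}))  # {"accuracy": "0.97"}
```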

Fixes

  • Fixed an issue where Gaussian smoothing was extremely aggressive on many plots.
  • Fixed an issue where pressing the Ignore Outliers in Chart Scaling button had no effect in the UI workspace.
  • Deactivated users can no longer be invited to an organization.
  • Fixed an issue where users added to an instance using the SCIM API could not onboard successfully.

Performance improvements

  • Significantly improved performance when editing a panel’s settings and applying the changes.
  • Improved the responsiveness of run visibility toggling in large workspaces.
  • Improved chart hovering and brushing performance on plots in large workspaces.
  • Reduced workspace memory usage and loading times in workspaces with many keys.

1.7.6 - 0.56.0

June 29, 2024

Features

The new Full Fidelity line plot in W&B Experiments enhances the visibility of training metrics by aggregating all data along the x-axis and displaying the minimum, maximum, and average values within each bucket. This lets users easily spot outliers and zoom into high-fidelity details without loss from downsampling. Learn more in our documentation.

Fixes

  • Fixed an issue where deleting a search term from a runset in a report could delete the panel or crash the report, by ensuring proper handling of selected text during copy/paste operations.
  • Fixed a problem with indenting bulleted items in reports, caused by an upgrade of slate and an additional check in the normalization process for elements.
  • Fixed an issue where text could not be selected from a panel while the report was in edit mode.
  • Fixed an issue where copy-pasting an entire panel grid in a report using Cmd+C was broken.
  • Fixed an issue where report sharing with a magic link was broken when a team had the Hide this team from all non-members setting enabled.
  • Restricted projects are now handled properly: only explicitly invited users can access them, and permissions are based on project members and team roles.
  • Instance admins can now write to their own named workspaces, read other personal and shared workspaces, and write to shared views in private and public projects.
  • Fixed an issue where a report would crash when editing filters, due to an out-of-bounds filter index caused by skipping non-individual filters while keeping the index count incremental.
  • Fixed an issue where unselecting a runset caused media panels in a report to crash, by ensuring that only runs in enabled runsets are returned.
  • Fixed an issue where the parameter importance panel crashed on initial load due to a violation-of-hooks error caused by a change in the order of hooks.
  • Chart data is no longer reloaded when scrolling down and then back up in small workspaces, improving performance and eliminating the feeling of slowness.

1.7.7 - 0.54.0

May 24, 2024

Features

  • You can now configure Secure storage connector (BYOB) at team-level in Dedicated Cloud or Self-managed instances on Microsoft Azure.
  • Organization admins can now enforce privacy settings across all W&B teams by setting those at the organization level, from within the Settings tab in the Organization Dashboard.
    • W&B recommends notifying team admins and other users before making such enforcement changes.
  • Added an option to enable direct lineage view for the artifact lineage DAG.
  • It’s now possible to restrict Organization or Instance Admins from self-joining or adding themselves to a W&B team, thus ensuring that only Data & AI personas have access to the projects within the teams.
    • W&B advises exercising caution and understanding all implications before enabling this setting. Reach out to your W&B team for any questions.
  • Dedicated Cloud on AWS is now also available in the Seoul (S. Korea) region.

Fixes

  • Fixed an issue where Reports failed to load on mobile.
  • Fixed the link to the git diff file in the run overview.
  • Fixed an intermittent issue with loading the Organization Dashboard for certain users.

1.7.8 - 0.52.2

April 25, 2024

Features

  • You can now enforce username and full name for users in your organization, by using OIDC claims from your SSO provider. Reach out to your W&B team or support if interested.
  • You can now disable use of personal projects in your organization to ensure that all projects are created within W&B teams and governed using admin-enforced guidelines. Reach out to your W&B team or support if interested.
  • Option to expand all versions in a cluster of runs or artifacts in the Artifacts Lineage DAG view.
  • UI improvements to Artifacts Lineage DAG - the type will now be visible for each entry in a cluster.

Fixes

  • Added pagination to image panels in media banks, displaying up to 32 images per page with enhanced grid aesthetics and improved pagination controls, while introducing a workaround for potential offset inconsistencies.
  • Resolved an issue where tooltips on system charts were not displaying by enforcing the isHovered parameter, which is essential for the crosshair UI visibility.
  • Unset the max-width property for images within media panels, addressing unintended style constraints previously applied to all images.
  • Fixed broken config overrides in launch drawer.
  • Fixed Launch drawer’s behavior when cloning from a run.

1.7.9 - 0.51.0

March 20, 2024

Features

You can now save multiple views of any workspace by clicking “Save as a new view” in the overflow menu of the workspace bar.

Learn more about how Saved views can enhance your team’s collaboration and project organization.

Image showing saved views

The release also includes these capabilities:

  • You can now set a project’s visibility scope to Restricted if you want to collaborate on AI workflows related to sensitive or confidential data.
    • When you create a restricted project within a team, you can add specific members from the team. Unlike with other project visibility scopes, team members do not get implicit access to a restricted project.
  • Enhanced Run Overview page performance: now 91% faster on load, with search functionality improved by 99.9%. Also enjoy RegEx search for Config and Summary data.
  • New UX for Artifacts Lineage DAG introduces clustering for 5+ nodes at the same level, preview window to examine a node’s details, and a significant speedup in the graph’s loading time.
  • The template variable values used for a run executed by launch, for example GPU type and quantity, are now shown on the queue’s list of runs. This makes it easier to see which runs are requesting which resources.
  • Cloning a run with Launch now pre-selects the overrides, queue, and template variable values used by the cloned run.
  • Instance admins will now see a Teams tab in the organization dashboard. It can be used to join a specific team when needed, whether it’s to monitor the team activity as per organizational guidelines or to help the team when team admins are not available.
  • SCIM User API now returns the groups attribute as part of the GET endpoint, which includes the id of the groups / teams a user is part of.
  • All Dedicated Cloud instances on GCP are now managed using the new W&B Kubernetes Operator. With that, the new Parquet Store service is also available.
    • The Parquet Store allows performant and cost-efficient storage of run history data in Parquet format in blob storage. Dedicated Cloud instances on AWS & Azure are already managed using the operator and include the Parquet Store.
  • Dedicated Cloud instances on AWS have been updated to use the latest version of the relational data storage, and the compute infrastructure has been upgraded to a newer generation with better performance.
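The SCIM change above means a GET on a user now includes group membership. A sketch of what such a response might look like (the `id`, `userName`, and group values here are illustrative placeholders; the shape follows the SCIM 2.0 User schema):

```json
{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "id": "abc123",
  "userName": "alice",
  "groups": [
    {"value": "team-id-123", "display": "ml-team"}
  ]
}
```

Each entry in `groups` carries the id of a team the user belongs to, which can be used to drive downstream access audits.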

Advance Notice: We urge all customers who use Webhooks with Automations to add a valid A-record for their endpoints, as we will disallow IP-address-based webhook URLs from the next release onwards. This protects against SSRF vulnerabilities and other related threat vectors.

Fixes

  • Fixed issue where expressions tab was not rendering for line plots.
  • Use display name for sweeps when grouped by sweeps in charts and runs table.
  • Auto navigation to runs page when selecting job version.

1.7.10 - 0.50.2

February 26, 2024

Feature

  • Add panel bank setting to auto-expand search results
  • Better visibility for run queue item issues
  • Dedicated Cloud customers on AWS can now use PrivateLink to securely connect to their deployments.
    • The feature is in private preview and will be part of an advanced pricing tier at GA. Reach out to your W&B team if interested.
  • You can now automate user role assignment for organization or team scopes using the SCIM role assignment API
  • All Dedicated Cloud instances on AWS & Azure are now managed using the new W&B Kubernetes Operator. With that, the new Parquet Store service is also available. The service allows for performant & cost efficient storage of run history data in parquet format in the blob storage. That in turn leads to faster loading of relevant history data in charts & plots that are used to evaluate the runs.
  • The W&B Kubernetes Operator, and with it the Parquet Store service, is now available for use in customer-managed instances. We encourage customers that already use Kubernetes to host W&B to reach out to their W&B team about adopting the operator, and we highly recommend that others migrate to Kubernetes to receive the latest performance improvements and new services via the operator in the future. We’re happy to assist with planning such a migration.

Fixes

  • Properly pass template variables through sweep scheduler
  • Scheduler polluting sweep yaml generator
  • Display user roles correctly on team members page when search or sort is applied
  • Org admins can again delete personal projects in their Dedicated Cloud or Self-managed server instance
  • Add validation for SCIM GET groups API for pending users

1.7.11 - 0.49.0

January 18, 2024

Feature

  • Set a default TTL (time-to-live or scheduled deletion) policy for a team in the team settings page.
    • Restrict setting or editing of a TTL policy to either admins only, or admins plus members.
  • Test and debug a webhook during webhook creation or after in the team settings UI.
    • W&B will send a dummy payload and display the receiving server’s response.
  • View Automation properties in the View Details slider.
    • This includes a summary of the triggering event and action, action configs, creation date, and a copyable curl command to test webhook automations.
  • Replace agent heartbeat with last successful run time in launch overview.
  • Service accounts can now use the Report API to create reports.
  • Use the new role management API to automate managing the custom roles.
  • Enable Kubernetes Operator for Dedicated Cloud deployments on AWS.
  • Configure a non-conflicting IP address range for managed cache used in Dedicated Cloud deployments on GCP.

Fixes

  • Update the add runset button clickable area in reports
  • Show proper truncate grouping message
  • Prevent flashing of publish button in reports
  • Horizontal rule no longer gets collapsed in report sections
  • Fixed “Add section” button being hidden in certain views
  • Allow values such as semantic version numbers to be plotted as strings
  • Remove requirements for quotes when using template variables in queue config definitions
  • Improve Launch queue sorting order
  • Don’t auto-open panel sections when searching large workspaces
  • Change label text for grouped runs
  • Open/close all sections while searching

1.7.12 - 0.48.0

December 20, 2023

Feature

  • All required frontend changes for launch prioritization
    • Refer to this blog on how you can run more important jobs than others using Launch.
  • Refer to below changes for access control and user attribution behavior of team service accounts:
    • When a team is configured in the training environment, a service account from that team can be used to log runs in either private or public projects within that team. The runs are additionally attributed to a user only if the WANDB_USERNAME or WANDB_USER_EMAIL variable is configured in the environment and the user is part of that team.
    • When a team is not configured in the training environment and a service account is used, runs are logged to the named project within the service account’s team, and are attributed to a user only if the WANDB_USERNAME or WANDB_USER_EMAIL variable is configured in the environment and the user is part of that team.
    • A team service account cannot log runs in a private project in another team, but it can log runs to public projects in other teams.
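For example, a training environment using a team service account might attribute runs to a human user by exporting the variables named above (all values here are placeholders):

```shell
# Placeholder values: WANDB_USERNAME must belong to the service account's team.
export WANDB_API_KEY="<service-account-api-key>"   # the team service account's key
export WANDB_ENTITY="my-team"                      # team whose service account is used
export WANDB_USERNAME="alice"                      # runs are attributed to this member
```

With these set, runs logged by the service account appear as authored by the named user rather than by the service account itself.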

Fixes

  • Reduce column widths for oversized runs selectors
  • Fix a couple of bugs related to Custom Roles preview feature

1.7.13 - 0.47.3

December 08, 2023

Fixes

We’re releasing a couple of important fixes for the Custom Roles preview capability launched as part of v0.47.2. If you’re interested in using that feature to create fine-grained roles and better align with the principle of least privilege, please use this latest server release and reach out to your Weights & Biases team for an updated enterprise license.

1.7.14 - 0.47.2

December 01, 2023

Feature

Use custom roles with specific permissions to customize access control within a team

  • Available in preview to enterprise customers. Please reach out to your Weights & Biases account team or support for any questions.

Image showing the new UI for custom roles

Also:

  • Minor runs search improvements
  • Auto-resize runs search for long texts
  • View webhook details, including URL, secrets, and date created, directly from the automations table for webhook automations

Fixes

  • Grouping of runs when group value is a string that looks like a number
  • Janky report panel dragging behavior
  • Update bar chart spec to match the one on public cloud
  • Clean up panel padding and plot margins
  • Restores workspace settings beta

1.7.15 - 0.46.0

November 15, 2023

Features

  • Deployments on AWS can now use W&B Secrets with Webhooks and Automations
    • Secrets are stored securely in AWS Secrets Manager; please use the terraform-aws-wandb module to provision one.
  • Update webhooks table to display more information
  • Better truncation of long strings to improve the usability of strings in the UI
  • Reduce delay for scroll to report section
  • Add white background to weave1 panels
  • Allow deep link for weave1 panels in reports
  • Allow weave1 panel resizing in reports
  • Homepage banner will now show CLI login instructions
  • User invites will now succeed even if the invite email can’t be sent for some reason
  • Add list of associated queues to agent overview page

Fixes

  • Copy function on panel overlay was dropping values
  • CSS cleanup for import modal when creating report
  • Fixes regression to hide legend when toggled off
  • Report comment highlighting
  • Remove all caching for view’s LoadMetadataList()
  • Let run search stretch
  • Associate launch agents with user id from X-WANDB-USERNAME header

1.7.16 - 0.45.0

October 25, 2023

Features

  • Enable artifact garbage collection by setting the environment variable GORILLA_ARTIFACT_GC_ENABLED=true and enabling cloud object versioning or soft deletion on your storage bucket.
  • The Terraform module terraform-azurerm-wandb now supports Azure Key Vault as a secrets store.
    • Deployments on Azure can now use W&B Secrets with Webhooks and Automations. Secrets are stored securely in Azure Key Vault.
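As a sketch, enabling the garbage collection described above comes down to setting the named environment variable on the W&B Server process; note that bucket object versioning or soft deletion must also be enabled separately:

```shell
# Set on the W&B Server container or process. Requires object versioning
# or soft deletion to be enabled on the storage bucket.
export GORILLA_ARTIFACT_GC_ENABLED=true
```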

Fixes

  • Remove invalid early exit preventing history deletion
  • When moving/copying runs, don’t drop key-set info
  • Update mutations to no longer use defunct storage plan or artifacts billing plan at all
  • Respect skip flag in useRemoteServer

1.7.17 - 0.44.1

October 12, 2023

Features

Add OpenAI proxy UI to SaaS and Server

Image showing the new OpenAI proxy UI

Also:

  • New version v1.19.0 of our GCP Terraform module terraform-google-wandb is available
  • Add support for AWS Secrets Manager for Customer Secret Store, which can be enabled after the Terraform module terraform-aws-wandb is updated and released
  • Add support for Azure Key Vault for Customer Secret Store, which can be enabled after the Terraform module terraform-azurerm-wandb is updated and released

Fixes

  • Quality-of-life improvements in the Model Registry UI
  • int values no longer ignored when determining if a run achieved a sweep’s optimization goal
  • Cache runs data to improve workspace loading perf
  • Improve TTL rendering in collection table
  • Allow service accounts to be made workflow (registry) admins
  • Add tooltip for truncated run tags in workspaces
  • Fix report page scrolling
  • Copy y data values for chart tooltip
  • Query secrets for webhooks in local
  • Fixing broken domain zoom in panel config
  • Hide Customer Secret Store UI if GORILLA_CUSTOMER_SECRET_STORE_SOURCE env var not set

Chores

  • Bump langchain to latest
  • Adding WB Prompts to quickstart
  • Update AWS MIs to use terraform-kubernetes-wandb v1.12.0
  • Show correct Teams Plan tracked hours teams settings page and hide on usage page

1.7.18 - 0.43.0

October 02, 2023

Release 0.43.0 contains a number of minor bug fixes and performance improvements, including fixing padding at the bottom of runs tables when there’s a scrollbar. Check out the other fixes below:

Demo of fixed Runs table

Fixes

  • Dramatically improve workspace loading perf
  • Fixing broken docs link in disabled add panel menu
  • Render childPanel without editor in report
  • Copying text from a panel grid when editing
  • Run overview crashing with ’length’ key
  • Padding for bottom of runs table when there’s a scrollbar
  • Eliminate unnecessary history key cache read
  • Error handling for Teams Checkout modal
  • Memory leak, excess filestream sending, and orphaned processes in Weave Python autotracer

1.7.19 - 0.42.0

September 14, 2023

Features

W&B Artifacts now supports time-to-live (TTL) policies

Image illustrating TTL policies for artifacts

Users now have more control over the deletion and retention of Artifacts logged with W&B, with the ability to set retention and time-to-live (TTL) policies. Determine when specific Artifacts should be deleted, update policies on existing Artifacts, and set TTL policies on upstream or downstream Artifacts.

Here are the other new features included in this release:

  • Use Launch drawer when creating Sweeps
  • Delete run queue items
  • Min/max aggregations nested dropdown
  • Allow users to connect multiple S3-compatible buckets
  • Add disk i/o system metrics
  • Use the legacy way to set permissions
  • Enable CustomerSecretStore
  • Add Kubernetes as a backend for CustomerSecretStore

Fixes

  • Disable storage and artifact invoices for ongoing storage calculations refactors
  • Panel deletion bug
  • Remove link-version event type from project automation slider
  • Remove upper case styling for artifact type names
  • Keep uncolored tags from changing color on render
  • Stale defaults stuck in Launch drawer on reopen
  • Trigger alias automations while creating artifact
  • Edge case failure in infinite loading tag filters

1.7.20 - 0.41.0

August 28, 2023

Features

New Launch landing page

Image showing the new Launch landing page

We’ve updated the Launch home page, so users looking to get started with Launch will have a much easier way to get set up quickly. Easily access detailed documentation, or simply follow the three Quickstart steps to create a Launch queue and agent and start launching jobs immediately.

Here are the other new features included in this release:

  • Add new reverse proxy to track OpenAI requests and responses
  • Show agent version on agent overview page
  • New model registry workflow removed from feature flag for all users

Fixes

  • Empty projects causing infinite load on storage explorer
  • Runs marked failed when run queue items are failed
  • Use correct bucket for storing OpenAI proxy artifacts
  • SEO tags not properly rendered by host
  • Trigger export in background, on context deadline as well
  • Transition runs in pending state to running when run is initialized
  • Query so Launch queues show most recent completed and failed jobs

1.7.21 - 0.40.0

August 18, 2023

Features

Webhooks

Image showing webhook configuration

Enable a seamless model CI/CD workflow using Webhook Automations to trigger specific actions within the CI/CD pipeline when certain events occur. Use webhooks to facilitate a clean hand-off point between ML engineering and devops. To see this in practice for Model Evaluation and Model Deployment, check out the linked demo videos. Learn more in our docs.

The new user activity dashboard is now on for all customers

Fixes

  • Removed limit on number of registered models an organization could have.
  • Added search history to workspaces to make it easier to find commonly used plots.
  • Changed reports “like” icon from hearts to stars.
  • Users can now change the selected run in a workspace view with a group of runs.
  • Fixed issue causing duplicate panel grids.
  • Users can now pass in per-job resource config overrides for Sweeps on Launch
  • Added redirect from /admin/users to new organization dashboard.
  • Fixed issues with LDAP dropping connections.
  • Improvements to run permadeletion.

1.7.22 - 0.39.0

July 27, 2023

Features

Revamped Organization Dashboard

Image showing revamped Organization Dashboard

We’ve made it easier to see who’s making the most of W&B with our overhauled Organization Dashboard, accessible to W&B admins. You can now see details on who’s created runs and reports, who’s actively using W&B, and whose invites are pending, and you can export all of this as a CSV to share across your organization. Learn more in the docs.

For Dedicated Cloud customers, this feature has been turned on. For Customer-Managed W&B customers, contact W&B support and we’ll be happy to work with you to enable it.

Fixes

  • Restrict service API keys to team admins
  • Launch agent configuration is now shown on the Agents page
  • Added navigation panel while viewing a single Launch job.
  • Automations can now show configuration parameters for the associated job.
  • Fixed issue with grouped runs not live updating
  • Removed extra / in magic and normal link url
  • Check base for incremental artifacts
  • Inviting a user into multiple teams will no longer take up too many seats in the org

1.7.23 - 0.38.0

July 13, 2023

Features

Metric visualization enhancements

Image showing metric visualization enhancements

We’re continuing to enhance our core metric visualization experience. You can now define which metrics matched by a regular expression to render in your plots, up to 100 metrics at once. And to more accurately represent data at high scale, we’ve added a new time-weighted exponential moving average smoothing algorithm for plots (check out all of our supported algorithms).

Feedback surveys

W&B has always built our product based on customer feedback. Now, we’re happy to introduce a new way for you to shape the future of W&B: in-app feedback surveys in your Dedicated Cloud or Customer-Managed W&B install. Starting July 17th, W&B users will start periodically seeing simple 1 - 10 Net Promoter Score surveys in the application. All identifying information is anonymized. We appreciate all your feedback and look forward to making W&B even better, together.

Fixes

  • Major improvement to artifact download speed: over a 6x speedup on our 1-million-file artifact benchmark. Please upgrade to SDK version 0.15.5+.
  • (Launch) Optuna is now available as a sweeps scheduler with Sweeps on Launch, allowing more efficient exploration of hyperparameters.
  • Run data permadeletion is now available (default off). This can be enabled with the GORILLA_DATA_RETENTION_PERIOD environment variable, specified in hours. Please take care before updating this variable and/or chat with W&B Support, since the deletion is permanent. Artifacts will not be deleted by this setting.
  • Updated report sharing emails to include a preview.
  • Relaxed HTML sanitation rules for reports in projects; this had been causing rare problems with report rendering.
  • Expanded the maximum number of metrics that can be matched by a regex in chart configuration; previously this had been always 10, the maximum is now 100.
  • Fixed issue with media panel step slider becoming unsynced with the media shown.
  • Added time-weighted exponential moving average as an option for smoothing in plots.
  • The “Search panels” textbox in workspaces now preserves the user’s last search.
  • Applying a username filter when runs are grouped will no longer error.
  • (Launch) The loading of the Launch tab should now be much faster, typically under two seconds.
  • (Launch) There’s now an option to edit queue configs using YAML instead of JSON. It’s also now more clear how to edit queue configs.
  • (Launch) Runs will now show error messages in the UI when they crash or fail.
  • (Launch) If you don’t specify a project when creating a job, we’ll now use the value for WANDB_PROJECT from your wandb.init.
  • (Launch) Updated support for custom accelerator images—these will run in noninteractive mode when building, which had been blocking some images.
  • (Launch) Fixed issue where the run author for sweeps was the agent service account, rather than the real author
  • (Launch) Clicking outside the Launch drawer will no longer close the drawer automatically.
  • (Launch) Fixed issue where training jobs that had been enqueued by a sweep but not run yet were not correctly removed from the queue if you later stopped the sweep.
  • (Launch) The Launch navigation link is now hidden for users who aren’t part of the team.
  • (Launch) Fixed formatting and display issues on Agent logs.
  • Fixed scrolling, resizing, and cloning issues in Automations panel.
  • Fixed pagination on artifact action history.
  • Added support for pre-signed URLs using a VPC endpoint URL if the AWS_S3_ENDPOINT_URL env var is set and passed in from the SDK side.
  • Fixed enterprise dashboard link when organization name contains “&”
  • Updated tag colors to be consistent
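The GORILLA_DATA_RETENTION_PERIOD variable mentioned in the permadeletion fix above takes a value in hours. As a sketch, a 90-day retention window would be configured on the server process like this (remember that the resulting deletion is permanent):

```shell
# 90 days expressed in hours; deletion is permanent, so double-check the value.
export GORILLA_DATA_RETENTION_PERIOD=$((90 * 24))   # 2160 hours
```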

1.7.24 - 0.36.0

June 14, 2023

Features

Clone Runs with Launch Image showing cloning a run with Launch

If you want to repeat a run but tweak a couple hyperparameters–say bump the batch size to take advantage of a larger machine–it’s now easy to clone a run using W&B Launch. Go to the run overview, click Clone, and you’ll be able to select new infrastructure to execute the job on, with new hyperparameters. Learn more in the Launch documentation.

Fixes

  • Added report creation and update action to audit logs.
  • Artifacts read through the SDK will now be captured in the audit logs.
  • In report creation, added button to select all plots to add to the new report
  • New view-only users signing up via a report link will now be fast tracked to the report, rather than going through the normal signup process.
  • Team admins can now add protected aliases.
  • Improved media panel handling of intermediate steps.
  • Removed inactive ‘New Model’ button from Model Registry homepage for anonymous users
  • Ability to copy data from plot legends has been rolled out to all users.
  • Fixed incorrect progress indicator in Model Registry onboarding checklist.
  • Fixed issue where the Automations page could crash when job name had slashes.
  • Fixed issue where a user could update the wrong user profiles.
  • Added option to permanently delete runs and their associated metrics after a duration specified in an environment variable.

1.7.25 - 0.35.0

June 07, 2023

Security

Fixed an issue where API keys were logged for recently logged-in users. Check for FetchAuthUserByAPIKey in the logs, which you can find in gorilla.log from a debug bundle, and rotate any keys that are found.

Features

Launch Agent Logs Now in the GUI

Image showing Launch agent logs in GUI

W&B Launch allows you to push machine learning jobs to a wide range of specialized compute environments. With this update, you can now use W&B to monitor and debug jobs running in these remote environments, without needing to log into your AWS or GCP console.

Fixes

  • Logs tab is no longer trimmed to 1000 rows.
  • Fixed scenario where artifact files pagination could get into an infinite loop
  • Fixed bug where success toast messages were not appearing
  • The Runs table will now correctly show the git commit value

1.7.26 - 0.34.0

May 31, 2023

Features

New Model Registry UI

Image showing new Model Registry UI

We’re making it easier for users to manage a long list of models, and navigate seamlessly between entities in the model registry. With this new UI, users can:

  • Look at all your registered models
  • Filter to registered models within a specific team
  • With the new list view, users can expand each panel to see the individual versions inside it, including each version’s aliases and metadata or run metrics. Clicking a version from this quick view takes you to its version view
  • Look at an overview directly by clicking “View Details”
  • See a preview of how many versions, consumers, and automations are present for each registered model
  • Create Automations directly
  • See some metadata columns and details in preview
  • Change Model Access Controls

Fixes

  • Improved search functionality for better universal search ranking results.
  • Added functionality to add/delete multiple tags at once in the model registry
  • Enhanced the FileMarkdown feature to correctly scroll long content.
  • Made the default team selection dropdown scrollable.
  • Removed the UI access restriction for Tier 1/2/3 plans based on tracked hour usage.
  • Added tooltips for LLM trace viewer spans
  • LLM trace timeline/detail now splits horizontally in fullscreen
  • Added entity / team badges to Model Registry entries.
  • Improved the navigation bar experience for logged out users
  • Disabled storage/artifact banners to avoid issue where UI blocks for orgs with excess artifacts.
  • Fixed issues where user avatars were not being displayed correctly.
  • Fixed issue using Launch with Azure Git URLs
  • Launch configuration boxes now work in airgapped environments
  • In Launch queue creation, show teams as disabled (rather than hidden) for non-admins.
  • Fixed issue with embedding projector rendering
  • Fixed an issue that prevented users from resetting their password in some cases involving mixed-case usernames.
  • Files with special characters now show up in the media panel in Azure
  • Added the ability to override the inline display format for timestamps.
  • Reports with custom charts now load when not logged in.
  • Wide GIFs no longer overflow fullscreen view
  • Increase default automations limit from 20 to 200.
  • Fixed bug allowing the appearance of deleting the version alias of a registered model (in fact, this could not be deleted on the backend).

1.7.27 - 0.33.0

May 10, 2023

Features

Prompts: Zoom and pan

Demo of zooming and panning

Explore complex chains of LLM prompts more easily with new zoom and pan controls in our prompts tracer.

Model registry admin role

Image showing Model registry admin role

Control your model promotion process with a new role for model registry admins. These users can manage the list of protected aliases (for example, “challenger” or “prod”), as well as apply or remove protected aliases for model versions.

Viewer role

You can now share your W&B findings with a broader audience with the introduction of a Viewer role for W&B Server. Users with this role can view anything their team(s) make, but cannot create, edit, or delete anything. These seats are measured separately from traditional W&B Server seats, so reach out to your W&B account team to request an updated license.

Team admins can now disable magic link sharing for a team and its members. Disabling public sharing in the team settings allows you to increase team privacy controls. Meanwhile, it’s now easier for users who receive a report link to access the report in W&B after signing up.

Improved report composition

Image showing improved report composition

Reports help share your W&B findings throughout an organization, including with people outside the ML team. We’ve made several investments to ensure it’s as simple and frictionless as possible to create and share them, including an improved report drafting experience with enhanced draft publication, editing, management, and sharing UX that improves how teams collaborate with Reports.

Updated navigation

As W&B has expanded the parts of the ML workflow we cover, we’ve heard your feedback that it can be hard to move around the application. So we’ve updated the navigation sidebar to include clearer labels on the product area, and added backlinks to certain detail screens. We’ve also renamed “Triggers” to “Automations” to better reflect the power of the feature.

Fixes

  • When hovering over a plot in workspaces or a report, you can now use Cmd+C or Ctrl+C to copy run names and plot values shown in the hover control.
  • Changes to default workspaces are now no longer auto-saved.
  • Metrics in the Overview → Summary section are now formatted with commas.
  • Added an install-level option to allow non-admin users to create teams (default off; contact W&B support to enable it).
  • Weave plots now support log scales.
  • The Launch panel can now be expanded horizontally to give more space for viewing parameters.
  • The Launch panel now indicates whether a queue is active
  • The Launch panel now allows you to choose a project for the run to be logged in.
  • Launch queues can now only be created by team admins.
  • Improved Markdown support in Launch panel.
  • Improved error message on empty Launch queue configurations.
  • Filters on the Sweeps parallel coordinates plot will now apply to all selected runsets.
  • Sweeps now no longer require a metric.
  • Added support for tracking reference artifact files saved outside W&B in Azure Blob Storage.
  • Fixed bug in Markdown editing in Reports
  • Fullscreen Weave panels can now share config changes with the original panel.
  • Improved display of empty tables
  • Fixed bug in which the first several characters of logs were cut off

1.8 - Release policies and processes

Release process for W&B Server

This page gives details about W&B Server releases and W&B’s release policies. This page relates to W&B Dedicated Cloud and Self-Managed deployments. To learn more about an individual W&B Server release, refer to W&B release notes.

W&B fully manages W&B Multi-tenant Cloud and the details in this page do not apply.

Release support and end of life policy

W&B supports a major W&B Server release for 12 months from its initial release date.

  • Dedicated Cloud instances are automatically updated to maintain support.

  • Customers with Self-managed instances are responsible for upgrading in time to maintain support. Avoid staying on an unsupported version.

Release types and frequencies

  • Major releases are produced monthly, and may include new features, enhancements, performance improvements, medium and low severity bug fixes, and deprecations. An example of a major release is 0.68.0.
  • Patch releases within a major version are produced as needed, and include critical and high severity bug fixes. An example of a patch release is 0.67.1.
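As an illustration of the numbering scheme above, a hypothetical helper (not part of any W&B API) that classifies a release string by its patch component:

```python
# Illustrative only: classifies a W&B Server version string per the scheme above,
# where a nonzero patch component (e.g. 0.67.1) denotes a patch release.
def release_type(version: str) -> str:
    major, minor, patch = (int(part) for part in version.split("."))
    return "patch" if patch > 0 else "major"

print(release_type("0.68.0"))  # major
print(release_type("0.67.1"))  # patch
```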

Release rollout

  1. After testing and validation are complete, a release is first rolled out to all Dedicated Cloud instances to keep them fully updated.
  2. After additional observation, the release is published, and Self-managed deployments can upgrade to it on their own schedule, and are responsible for upgrading in time to comply with the Release support and End of Life (EOL) policy. Learn more about upgrading W&B Server.

Downtime during upgrades

  • When a Dedicated Cloud instance is upgraded, downtime is generally not expected, but may occur in certain situations:
    • If a new feature or enhancement requires changes to the underlying infrastructure, such as compute, storage or network.
    • To roll out a critical infrastructure change such as a security fix.
    • If the instance’s current version has reached its End of Life (EOL) and is upgraded by W&B to maintain support.
  • For Self-managed deployments, the customer is responsible for implementing a rolling update process that meets their service level objectives (SLOs), such as by running W&B Server on Kubernetes.

Feature availability

After installing or upgrading, certain features may not be immediately available.

Enterprise features

An Enterprise license includes support for important security capabilities and other enterprise-friendly functionality. Some advanced features require an Enterprise license.

  • Dedicated Cloud includes an Enterprise license and no action is required.
  • On Self-managed deployments, features that require an Enterprise license are not available until it is set. To learn more or obtain an Enterprise license, refer to Obtain your W&B Server license.

Private preview and opt-in features

Most features are available immediately after installing or upgrading W&B Server. The W&B team must enable certain features before you can use them in your instance.

  • Private preview: W&B invites design partners and early adopters to test these features and provide feedback. Private preview features are not recommended for production environments.

    The W&B team must enable a private preview feature for your instance before you can use it. Public documentation is not available; instructions are provided directly. Interfaces and APIs may change, and the feature may not be fully implemented.

  • Public preview: Contact W&B to opt in to a public preview to try it out before it is generally available.

    The W&B team must enable a public preview feature before you can use it in your instance. Documentation may not be complete, interfaces and APIs may change, and the feature may not be fully implemented.

To learn more about an individual W&B Server release, including any limitations, refer to W&B Release notes.

2 - Command Line Interface

Usage

wandb [OPTIONS] COMMAND [ARGS]...

Options

Option Description
--version Show the version and exit.

Commands

Command Description
agent Run the W&B agent
artifact Commands for interacting with artifacts
beta Beta versions of wandb CLI commands.
controller Run the W&B local sweep controller
disabled Disable W&B.
docker Run your code in a docker container.
docker-run Wrap docker run and adds WANDB_API_KEY and WANDB_DOCKER…
enabled Enable W&B.
init Configure a directory with Weights & Biases
job Commands for managing and viewing W&B jobs
launch Launch or queue a W&B Job.
launch-agent Run a W&B launch agent.
launch-sweep Run a W&B launch sweep (Experimental).
login Login to Weights & Biases
offline Disable W&B sync
online Enable W&B sync
pull Pull files from Weights & Biases
restore Restore code, config and docker state for a run
scheduler Run a W&B launch sweep scheduler (Experimental)
server Commands for operating a local W&B server
status Show configuration settings
sweep Initialize a hyperparameter sweep.
sync Upload an offline training directory to W&B
verify Verify your local instance

2.1 - wandb agent

Usage

wandb agent [OPTIONS] SWEEP_ID

Summary

Run the W&B agent

Options

Option Description
-p, --project The name of the project where W&B runs created from the sweep are sent to. If the project is not specified, the run is sent to a project labeled ‘Uncategorized’.
-e, --entity The username or team name where you want to send W&B runs created by the sweep to. Ensure that the entity you specify already exists. If you don’t specify an entity, the run will be sent to your default entity, which is usually your username.
--count The max number of runs for this agent.
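For example, the following runs a single agent that executes at most five trials from a sweep. The entity, project, and sweep ID are placeholders; substitute your own values:

```shell
# Run up to 5 sweep trials with one agent; the IDs below are placeholders.
wandb agent --count 5 my-entity/my-project/abc123de
```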

2.2 - wandb artifact

Usage

wandb artifact [OPTIONS] COMMAND [ARGS]...

Summary

Commands for interacting with artifacts

Options

Option Description

Commands

Command Description
cache Commands for interacting with the artifact cache
get Download an artifact from wandb
ls List all artifacts in a wandb project
put Upload an artifact to wandb

2.2.1 - wandb artifact cache

Usage

wandb artifact cache [OPTIONS] COMMAND [ARGS]...

Summary

Commands for interacting with the artifact cache

Options

Option Description

Commands

Command Description
cleanup Clean up less frequently used files from the artifacts cache

2.2.1.1 - wandb artifact cache cleanup

Usage

wandb artifact cache cleanup [OPTIONS] TARGET_SIZE

Summary

Clean up less frequently used files from the artifacts cache

Options

Option Description
--remove-temp / --no-remove-temp Remove temp files
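As a sketch, the following prunes the cache to a target size. The human-readable size string is an assumption based on common usage; check `wandb artifact cache cleanup --help` for the accepted format:

```shell
# Shrink the artifact cache to about 10 GB and remove temp files.
wandb artifact cache cleanup --remove-temp 10GB
```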

2.2.2 - wandb artifact get

Usage

wandb artifact get [OPTIONS] PATH

Summary

Download an artifact from wandb

Options

Option Description
--root The directory you want to download the artifact to
--type The type of artifact you are downloading
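For example, to download a specific artifact version into a local directory (the entity, project, and artifact names are placeholders):

```shell
# Download version v0 of a dataset artifact into ./artifacts.
wandb artifact get --root ./artifacts my-entity/my-project/my-dataset:v0
```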

2.2.3 - wandb artifact ls

Usage

wandb artifact ls [OPTIONS] PATH

Summary

List all artifacts in a wandb project

Options

Option Description
-t, --type The type of artifacts to list
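For example (the entity and project names are placeholders):

```shell
# List only model artifacts in a project.
wandb artifact ls --type model my-entity/my-project
```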

2.2.4 - wandb artifact put

Usage

wandb artifact put [OPTIONS] PATH

Summary

Upload an artifact to wandb

Options

Option Description
-n, --name The name of the artifact to push: project/artifact_name
-d, --description A description of this artifact
-t, --type The type of the artifact
-a, --alias An alias to apply to this artifact
--id The run you want to upload to.
--resume Resume the last run from your current directory.
--skip_cache Skip caching while uploading artifact files.
--policy [mutable|immutable] Set the storage policy while uploading artifact files.
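For example, to upload a local directory as a new dataset artifact (the project and artifact names are placeholders):

```shell
# Upload ./data as a version of the artifact my-project/raw-data.
wandb artifact put --name my-project/raw-data --type dataset ./data
```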

2.3 - wandb beta

Usage

wandb beta [OPTIONS] COMMAND [ARGS]...

Summary

Beta versions of wandb CLI commands. Requires wandb-core.

Options

Option Description

Commands

Command Description
sync Upload a training run to W&B

2.3.1 - wandb beta sync

Usage

wandb beta sync [OPTIONS] WANDB_DIR

Summary

Upload a training run to W&B

Options

Option Description
--id The run you want to upload to.
-p, --project The project you want to upload to.
-e, --entity The entity to scope to.
--skip-console Skip console logs
--append Append run
-i, --include Glob to include. Can be used multiple times.
-e, --exclude Glob to exclude. Can be used multiple times.
--mark-synced / --no-mark-synced Mark runs as synced
--skip-synced / --no-skip-synced Skip synced runs
--dry-run Perform a dry run without uploading anything.
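For example, first preview what would be uploaded from an offline run directory, then sync it for real. The directory name below is a placeholder in the usual offline-run naming format:

```shell
# Preview the sync without uploading anything.
wandb beta sync --dry-run ./wandb/offline-run-20240101_120000-abc123de
# Upload the run and mark it as synced.
wandb beta sync --mark-synced ./wandb/offline-run-20240101_120000-abc123de
```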

2.4 - wandb controller

Usage

wandb controller [OPTIONS] SWEEP_ID

Summary

Run the W&B local sweep controller

Options

Option Description
--verbose Display verbose output

2.5 - wandb disabled

Usage

wandb disabled [OPTIONS]

Summary

Disable W&B.

Options

Option Description
--service Disable W&B service [default: True]

2.6 - wandb docker

Usage

wandb docker [OPTIONS] [DOCKER_RUN_ARGS]... [DOCKER_IMAGE]

Summary

Run your code in a docker container.

W&B docker lets you run your code in a docker image ensuring wandb is configured. It adds the WANDB_DOCKER and WANDB_API_KEY environment variables to your container and mounts the current directory in /app by default. You can pass additional args which will be added to docker run before the image name is declared, we’ll choose a default image for you if one isn’t passed:

wandb docker -v /mnt/dataset:/app/data
wandb docker gcr.io/kubeflow-images-public/tensorflow-1.12.0-notebook-cpu:v0.4.0 --jupyter
wandb docker wandb/deepo:keras-gpu --no-tty --cmd "python train.py --epochs=5"

By default, we override the entrypoint to check for the existence of wandb and install it if not present. If you pass the --jupyter flag we will ensure jupyter is installed and start jupyter lab on port 8888. If we detect nvidia-docker on your system we will use the nvidia runtime. If you just want wandb to set environment variables for an existing docker run command, see the wandb docker-run command.

Options

Option Description
--nvidia / --no-nvidia Use the nvidia runtime, defaults to nvidia if nvidia-docker is present
--digest Output the image digest and exit
--jupyter / --no-jupyter Run jupyter lab in the container
--dir Which directory to mount the code in the container
--no-dir Don’t mount the current directory
--shell The shell to start the container with
--port The host port to bind jupyter on
--cmd The command to run in the container
--no-tty Run the command without a tty
2.7 - wandb docker-run

Usage

wandb docker-run [OPTIONS] [DOCKER_RUN_ARGS]...

Summary

Wrap docker run and adds WANDB_API_KEY and WANDB_DOCKER environment variables.

This will also set the runtime to nvidia if the nvidia-docker executable is present on the system and --runtime wasn’t set.

See docker run --help for more details.

Options

Option Description

2.8 - wandb enabled

Usage

wandb enabled [OPTIONS]

Summary

Enable W&B.

Options

Option Description
--service Enable W&B service [default: True]

2.9 - wandb init

Usage

wandb init [OPTIONS]

Summary

Configure a directory with Weights & Biases

Options

Option Description
-p, --project The project to use.
-e, --entity The entity to scope the project to.
--reset Reset settings
-m, --mode Can be “online”, “offline” or “disabled”. Defaults to online.
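For example, to point the current directory at a specific entity and project (the names are placeholders):

```shell
# Write settings so future runs in this directory use these defaults.
wandb init --entity my-team --project my-project
```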

2.10 - wandb job

Usage

wandb job [OPTIONS] COMMAND [ARGS]...

Summary

Commands for managing and viewing W&B jobs

Options

Option Description

Commands

Command Description
create Create a job from a source, without a wandb run.
describe Describe a launch job.
list List jobs in a project

2.10.1 - wandb job create

Usage

wandb job create [OPTIONS] {git|code|image} PATH

Summary

Create a job from a source, without a wandb run.

Jobs can be of three types, git, code, or image.

  • git: A git source, with an entrypoint either in the path or provided explicitly pointing to the main python executable.
  • code: A code path, containing a requirements.txt file.
  • image: A docker image.

Options

Option Description
-p, --project The project you want to list jobs from.
-e, --entity The entity the jobs belong to
-n, --name Name for the job
-d, --description Description for the job
-a, --alias Alias for the job
--entry-point Entrypoint to the script, including an executable and an entrypoint file. Required for code or repo jobs. If --build-context is provided, paths in the entrypoint command will be relative to the build context.
-g, --git-hash Commit reference to use as the source for git jobs
-r, --runtime Python runtime to execute the job
-b, --build-context Path to the build context from the root of the job source code. If provided, this is used as the base path for the Dockerfile and entrypoint.
--base-image Base image to use for the job. Incompatible with image jobs.
--dockerfile Path to the Dockerfile for the job. If --build-context is provided, the Dockerfile path will be relative to the build context.
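For example, the following creates an image-sourced job from a prebuilt container. The image tag, project, and job names are placeholders:

```shell
# Register a Docker image as a launchable job.
wandb job create --project my-project --name train-job image my-registry/train:v1
```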

2.10.2 - wandb job describe

Usage

wandb job describe [OPTIONS] JOB

Summary

Describe a launch job. Provide the launch job in the form of: entity/project/job-name:alias-or-version

Options

Option Description

2.10.3 - wandb job list

Usage

wandb job list [OPTIONS]

Summary

List jobs in a project

Options

Option Description
-p, --project The project you want to list jobs from.
-e, --entity The entity the jobs belong to

2.11 - wandb launch

Usage

wandb launch [OPTIONS]

Summary

Launch or queue a W&B Job. See https://wandb.me/launch

Options

Option Description
-u, --uri (str) Local path or git repo uri to launch. If provided this command will create a job from the specified uri.
-j, --job (str) Name of the job to launch. If passed in, launch does not require a uri.
--entry-point Entry point within project. [default: main]. If the entry point is not found, attempts to run the project file with the specified name as a script, using ‘python’ to run .py files and the default shell (specified by environment variable $SHELL) to run .sh files. If passed in, will override the entrypoint value passed in using a config file.
--build-context (str) Path to the build context within the source code. Defaults to the root of the source code. Compatible only with -u.
--name Name to give the launched run. If not specified, a random run name is used. If passed in, will override the name passed in using a config file.
-e, --entity (str) Name of the target entity which the new run will be sent to. Defaults to using the entity set by local wandb/settings folder. If passed in, will override the entity value passed in using a config file.
-p, --project (str) Name of the target project which the new run will be sent to. Defaults to using the project name given by the source uri or for github runs, the git repo name. If passed in, will override the project value passed in using a config file.
-r, --resource Execution resource to use for run. Supported values: ‘local-process’, ‘local-container’, ‘kubernetes’, ‘sagemaker’, ‘gcp-vertex’. This is now a required parameter if pushing to a queue with no resource configuration. If passed in, will override the resource value passed in using a config file.
-d, --docker-image Specific docker image you’d like to use. In the form name:tag. If passed in, will override the docker image value passed in using a config file.
--base-image Docker image to run job code in. Incompatible with --docker-image.
-c, --config Path to JSON file (must end in ‘.json’) or JSON string which will be passed as a launch config. Dictates how the launched run will be configured.
-v, --set-var Set template variable values for queues with allow listing enabled, as key-value pairs e.g. --set-var key1=value1 --set-var key2=value2
-q, --queue Name of run queue to push to. If none, launches single run directly. If supplied without an argument (--queue), defaults to queue ‘default’. Else, if name supplied, specified run queue must exist under the project and entity supplied.
--async Flag to run the job asynchronously. Defaults to false, i.e. unless --async is set, wandb launch will wait for the job to finish. This option is incompatible with --queue; asynchronous options when running with an agent should be set on wandb launch-agent.
--resource-args Path to JSON file (must end in ‘.json’) or JSON string which will be passed as resource args to the compute resource. The exact content which should be provided is different for each execution backend. See documentation for layout of this file.
--dockerfile Path to the Dockerfile used to build the job, relative to the job’s root
--priority [critical|high|medium|low] When --queue is passed, set the priority of the job.
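As a sketch, the following queues an existing job on a queue that targets Kubernetes. The job name and queue are placeholders:

```shell
# Push a job onto the "default" queue for an agent to pick up.
wandb launch --job my-entity/my-project/train-job:latest --queue default --resource kubernetes
```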

2.12 - wandb launch-agent

Usage

wandb launch-agent [OPTIONS]

Summary

Run a W&B launch agent.

Options

Option Description
-q, --queue The name of a queue for the agent to watch. Multiple -q flags supported.
-e, --entity The entity to use. Defaults to current logged-in user
-l, --log-file Destination for internal agent logs. Use - for stdout. By default all agent logs go to debug.log in your wandb/ subdirectory, or WANDB_DIR if set.
-j, --max-jobs The maximum number of launch jobs this agent can run in parallel. Defaults to 1. Set to -1 for no upper limit
-c, --config path to the agent config yaml to use
-v, --verbose Display verbose output
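For example, an agent that watches two queues and runs up to four jobs in parallel (the queue and entity names are placeholders):

```shell
# Poll both queues and execute up to 4 launch jobs at once.
wandb launch-agent -q default -q gpu-queue -e my-team --max-jobs 4
```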

2.13 - wandb launch-sweep

Usage

wandb launch-sweep [OPTIONS] [CONFIG]

Summary

Run a W&B launch sweep (Experimental).

Options

Option Description
-q, --queue The name of a queue to push the sweep to
-p, --project Name of the project which the agent will watch. If passed in, will override the project value passed in using a config file
-e, --entity The entity to use. Defaults to current logged-in user
-r, --resume_id Resume a launch sweep by passing an 8-char sweep id. Queue required
--prior_run ID of an existing run to add to this sweep

2.14 - wandb login

Usage

wandb login [OPTIONS] [KEY]...

Summary

Login to Weights & Biases

Options

Option Description
--cloud Login to the cloud instead of local
--host, --base-url Login to a specific instance of W&B
--relogin Force relogin if already logged in.
--anonymously Log in anonymously
--verify / --no-verify Verify login credentials
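For example, to authenticate against a private W&B Server instance (the host URL is a placeholder):

```shell
# Force a fresh login against a self-hosted instance.
wandb login --relogin --host https://wandb.example.com
```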

2.15 - wandb offline

Usage

wandb offline [OPTIONS]

Summary

Disable W&B sync

Options

Option Description

2.16 - wandb online

Usage

wandb online [OPTIONS]

Summary

Enable W&B sync

Options

Option Description

2.17 - wandb pull

Usage

wandb pull [OPTIONS] RUN

Summary

Pull files from Weights & Biases

Options

Option Description
-p, --project The project you want to download.
-e, --entity The entity to scope the listing to.

2.18 - wandb restore

Usage

wandb restore [OPTIONS] RUN

Summary

Restore code, config and docker state for a run

Options

Option Description
--no-git Don’t restore git state
--branch / --no-branch Whether to create a branch or checkout detached
-p, --project The project you wish to upload to.
-e, --entity The entity to scope the listing to.

2.19 - wandb scheduler

Usage

wandb scheduler [OPTIONS] SWEEP_ID

Summary

Run a W&B launch sweep scheduler (Experimental)

Options

Option Description

2.20 - wandb server

Usage

wandb server [OPTIONS] COMMAND [ARGS]...

Summary

Commands for operating a local W&B server

Options

Option Description

Commands

Command Description
start Start a local W&B server
stop Stop a local W&B server

2.20.1 - wandb server start

Usage

wandb server start [OPTIONS]

Summary

Start a local W&B server

Options

Option Description
-p, --port The host port to bind W&B server on
-e, --env Env vars to pass to wandb/local
--daemon / --no-daemon Run or don’t run in daemon mode
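For example:

```shell
# Start the local W&B server on port 8080 in daemon mode.
wandb server start --port 8080 --daemon
```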

2.20.2 - wandb server stop

Usage

wandb server stop [OPTIONS]

Summary

Stop a local W&B server

Options

Option Description

2.21 - wandb status

Usage

wandb status [OPTIONS]

Summary

Show configuration settings

Options

Option Description
--settings / --no-settings Show the current settings

2.22 - wandb sweep

Usage

wandb sweep [OPTIONS] CONFIG_YAML_OR_SWEEP_ID

Summary

Initialize a hyperparameter sweep. Search for hyperparameters that optimize a cost function of a machine learning model by testing various combinations.

Options

Option Description
-p, --project The name of the project where W&B runs created from the sweep are sent to. If the project is not specified, the run is sent to a project labeled Uncategorized.
-e, --entity The username or team name where you want to send W&B runs created by the sweep to. Ensure that the entity you specify already exists. If you don’t specify an entity, the run will be sent to your default entity, which is usually your username.
--controller Run local controller
--verbose Display verbose output
--name The name of the sweep. The sweep ID is used if no name is specified.
--program Set sweep program
--update Update pending sweep
--stop Finish a sweep to stop running new runs and let currently running runs finish.
--cancel Cancel a sweep to kill all running runs and stop running new runs.
--pause Pause a sweep to temporarily stop running new runs.
--resume Resume a sweep to continue running new runs.
--prior_run ID of an existing run to add to this sweep
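For example, create a sweep from a config file and later pause it. The config filename, names, and sweep ID are placeholders:

```shell
# Create the sweep; the command prints a sweep ID to use with wandb agent.
wandb sweep --project my-project --name my-sweep sweep.yaml
# Pause the sweep so no new runs start.
wandb sweep --pause my-entity/my-project/abc123de
```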

2.23 - wandb sync

Usage

wandb sync [OPTIONS] [PATH]...

Summary

Upload an offline training directory to W&B

Options

Option Description
--id The run you want to upload to.
-p, --project The project you want to upload to.
-e, --entity The entity to scope to.
--job_type Specifies the type of run for grouping related runs together.
--sync-tensorboard / --no-sync-tensorboard Stream tfevent files to wandb.
--include-globs Comma separated list of globs to include.
--exclude-globs Comma separated list of globs to exclude.
--include-online / --no-include-online Include online runs
--include-offline / --no-include-offline Include offline runs
--include-synced / --no-include-synced Include synced runs
--mark-synced / --no-mark-synced Mark runs as synced
--sync-all Sync all runs
--clean Delete synced runs
--clean-old-hours Delete runs created before this many hours. To be used alongside --clean flag.
--clean-force Clean without confirmation prompt.
--show Number of runs to show
--append Append run
--skip-console Skip console logs
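For example, sync everything left in the local wandb/ directory, then clean up older synced runs:

```shell
# Upload all offline runs and mark them as synced.
wandb sync --sync-all --mark-synced
# Delete local copies of synced runs older than 24 hours, without prompting.
wandb sync --clean --clean-old-hours 24 --clean-force
```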

2.24 - wandb verify

Usage

wandb verify [OPTIONS]

Summary

Verify your local instance

Options

Option Description
--host Test a specific instance of W&B

3 - JavaScript Library

The W&B SDK for TypeScript, Node, and modern Web Browsers

Similar to our Python library, we offer a client to track experiments in JavaScript/TypeScript.

  • Log metrics from your Node server and display them in interactive plots on W&B
  • Debug LLM applications with interactive traces
  • Debug LangChain.js usage

This library is compatible with Node and modern JS runtimes.

You can find the source code for the JavaScript client in the Github repository.

Installation

npm install @wandb/sdk
# or ...
yarn add @wandb/sdk

Usage

TypeScript/ESM:

import wandb from '@wandb/sdk'

async function track() {
    await wandb.init({config: {test: 1}});
    wandb.log({acc: 0.9, loss: 0.1});
    wandb.log({acc: 0.91, loss: 0.09});
    await wandb.finish();
}

await track()

Node/CommonJS:

const wandb = require('@wandb/sdk').default;

We’re currently missing a lot of the functionality found in our Python SDK, but basic logging functionality is available. We’ll be adding additional features like Tables soon.

Authentication and Settings

In Node environments we look for process.env.WANDB_API_KEY and prompt for its input if we have a TTY. In non-Node environments we look for sessionStorage.getItem("WANDB_API_KEY"). Additional settings can be found here.

Integrations

Our Python integrations are widely used by our community, and we hope to build out more JavaScript integrations to help LLM app builders leverage whatever tool they want.

If you have any requests for additional integrations, we’d love you to open an issue with details about the request.

LangChain.js

This library integrates with the popular library for building LLM applications, LangChain.js version >= 0.0.75.

import {WandbTracer} from '@wandb/sdk/integrations/langchain';

const wbTracer = await WandbTracer.init({project: 'langchain-test'});
// run your langchain workloads...
chain.call({input: "My prompt"}, wbTracer)
await WandbTracer.finish();

See this test for a more detailed example.

4 - Python Library

Use wandb to track machine learning work.

Train and fine-tune models, manage models from experimentation to production.

For guides and examples, see https://docs.wandb.ai.

For scripts and interactive notebooks, see https://github.com/wandb/examples.

For reference documentation, see https://docs.wandb.com/ref/python.

Classes

class Artifact: Flexible and lightweight building block for dataset and model versioning.

class Run: A unit of computation logged by wandb. Typically, this is an ML experiment.

Functions

agent(...): Start one or more sweep agents.

controller(...): Public sweep controller constructor.

finish(...): Finish a run and upload any remaining data.

init(...): Start a new run to track and log to W&B.

log(...): Upload run data.

login(...): Set up W&B login credentials.

save(...): Sync one or more files to W&B.

sweep(...): Initialize a hyperparameter sweep.

watch(...): Hooks into the given PyTorch model(s) to monitor gradients and the model’s computational graph.

Other Members
__version__ '0.19.11'
config
summary

4.1 - API Walkthrough

Learn when and how to use different W&B APIs to track, share, and manage model artifacts in your machine learning workflows. This page covers logging experiments, generating reports, and accessing logged data using the appropriate W&B API for each task.

W&B offers the following APIs:

  • W&B Python SDK (wandb.sdk): Log and monitor experiments during training.
  • W&B Public API (wandb.apis.public): Query and analyze logged experiment data.
  • W&B Reports API (wandb.wandb-workspaces): Create reports to summarize findings.

Sign up and create an API key

To authenticate your machine with W&B, you must first generate an API key at wandb.ai/authorize. Copy the API key and store it securely.

Install and import packages

Install the W&B library and some other packages you will need for this walkthrough.

pip install wandb

Import W&B Python SDK:

import wandb

Specify the entity of your team in the following code block:

TEAM_ENTITY = "<Team_Entity>" # Replace with your team entity
PROJECT = "my-awesome-project"

Train model

The following code simulates a basic machine learning workflow: training a model, logging metrics, and saving the model as an artifact.

Use the W&B Python SDK (wandb.sdk) to interact with W&B during training. Log the loss using wandb.log, then save the trained model as an artifact using wandb.Artifact before finally adding the model file using Artifact.add_file.

import random # For simulating data

def model(training_data: int) -> int:
    """Model simulation for demonstration purposes."""
    return training_data * 2 + random.randint(-1, 1)  

# Simulate weights and noise
weights = random.random() # Initialize random weights
noise = random.random() / 5  # Small random value to simulate noise

# Hyperparameters and configuration
config = {
    "epochs": 10,  # Number of epochs to train
    "learning_rate": 0.01,  # Learning rate for the optimizer
}

# Use context manager to initialize and close W&B runs
with wandb.init(project=PROJECT, entity=TEAM_ENTITY, config=config) as run:    
    # Simulate training loop
    for epoch in range(config["epochs"]):
        xb = weights + noise  # Simulated input training data
        yb = weights + noise * 2  # Simulated target output (double the input noise)
        
        y_pred = model(xb)  # Model prediction
        loss = (yb - y_pred) ** 2  # Mean Squared Error loss

        print(f"epoch={epoch}, loss={loss}")
        # Log epoch and loss to W&B
        run.log({
            "epoch": epoch,
            "loss": loss,
        })

    # Unique name for the model artifact
    model_artifact_name = "model-demo"

    # Local path to save the simulated model file
    PATH = "model.txt" 

    # Save model locally
    with open(PATH, "w") as f:
        f.write(str(weights)) # Saving model weights to a file

    # Create an artifact object
    # Add locally saved model to artifact object
    artifact = wandb.Artifact(name=model_artifact_name, type="model", description="My trained model")
    artifact.add_file(local_path=PATH)
    artifact.save()

The key takeaways from the previous code block are:

  • Use wandb.log to log metrics during training.
  • Use wandb.Artifact to save models, datasets, and other files as artifacts in your W&B project.

Now that you have trained a model and saved it as an artifact, you can publish it to a registry in W&B. Use wandb.use_artifact to retrieve the artifact from your project and prepare it for publication in the Model registry. wandb.use_artifact serves two key purposes:

  • Retrieves the artifact object from your project.
  • Marks the artifact as an input to the run, ensuring reproducibility and traceability. See Create and view lineage map for details.

Publish the model to the Model registry

To share the model with others in your organization, publish it to a collection using wandb.link_artifact. The following code links the artifact to the core Model registry, making it accessible to your team.

# Artifact name specifies the specific artifact version within our team's project
artifact_name = f'{TEAM_ENTITY}/{PROJECT}/{model_artifact_name}:v0'
print("Artifact name: ", artifact_name)

REGISTRY_NAME = "Model" # Name of the registry in W&B
COLLECTION_NAME = "DemoModels"  # Name of the collection in the registry

# Create a target path for our artifact in the registry
target_path = f"wandb-registry-{REGISTRY_NAME}/{COLLECTION_NAME}"
print("Target path: ", target_path)

run = wandb.init(entity=TEAM_ENTITY, project=PROJECT)
model_artifact = run.use_artifact(artifact_or_name=artifact_name, type="model")
run.link_artifact(artifact=model_artifact, target_path=target_path)
run.finish()

After running link_artifact(), the model artifact will be in the DemoModels collection in your registry. From there, you can view details such as the version history, lineage map, and other metadata.

For additional information on how to link artifacts to a registry, see Link artifacts to a registry.

Retrieve model artifact from registry for inference

To use a model for inference, use use_artifact() to retrieve the published artifact from the registry. This returns an artifact object; call its download() method to download the artifact files to a local directory.

REGISTRY_NAME = "Model"  # Name of the registry in W&B
COLLECTION_NAME = "DemoModels"  # Name of the collection in the registry
VERSION = 0 # Version of the artifact to retrieve

model_artifact_name = f"wandb-registry-{REGISTRY_NAME}/{COLLECTION_NAME}:v{VERSION}"
print(f"Model artifact name: {model_artifact_name}")

run = wandb.init(entity=TEAM_ENTITY, project=PROJECT)
registry_model = run.use_artifact(artifact_or_name=model_artifact_name)
local_model_path = registry_model.download()

For more information on how to retrieve artifacts from a registry, see Download an artifact from a registry.

Depending on your machine learning framework, you may need to recreate the model architecture before loading the weights. This is left as an exercise for the reader, as it depends on the specific framework and model you are using.

Share your findings with a report

Create and share a report to summarize your work. To create a report programmatically, use the W&B Reports API.

First, install the W&B Reports API:

pip install wandb wandb-workspaces -qqq

The following code block creates a report with multiple blocks, including markdown, panel grids, and more. You can customize the report by adding more blocks or changing the content of existing blocks.

The code block prints the URL of the created report. You can open this link in your browser to view the report.

import wandb_workspaces.reports.v2 as wr

experiment_summary = """This is a summary of the experiment conducted to train a simple model using W&B."""
dataset_info = """The dataset used for training consists of synthetic data generated by a simple model."""
model_info = """The model is a simple linear regression model that predicts output based on input data with some noise."""

report = wr.Report(
    project=PROJECT,
    entity=TEAM_ENTITY,
    title="My Awesome Model Training Report",
    description=experiment_summary,
    blocks= [
        wr.TableOfContents(),
        wr.H2("Experiment Summary"),
        wr.MarkdownBlock(text=experiment_summary),
        wr.H2("Dataset Information"),
        wr.MarkdownBlock(text=dataset_info),
        wr.H2("Model Information"),
        wr.MarkdownBlock(text = model_info),
        wr.PanelGrid(
            panels=[
                wr.LinePlot(title="Train Loss", x="Step", y=["loss"], title_x="Step", title_y="Loss")
                ],
            ),  
    ]

)

# Save the report to W&B
report.save()

For more information on how to create a report programmatically or how to create a report interactively with the W&B App, see Create a report in the W&B Docs Developer guide.

Query the registry

Use the W&B Public APIs to query, analyze, and manage historical data from W&B. This can be useful for tracking the lineage of artifacts, comparing different versions, and analyzing the performance of models over time.

The following code block demonstrates how to query the Model registry for all artifacts in a specific collection. It retrieves the collection and iterates through its versions, printing out the name and version of each artifact.

import wandb

# Initialize wandb API
api = wandb.Api()

# Find all artifact versions that contain the string `model` and
# have either the tag `text-classification` or a `latest` alias
registry_filters = {
    "name": {"$regex": "model"}
}

# Use logical $or operator to filter artifact versions
version_filters = {
    "$or": [
        {"tag": "text-classification"},
        {"alias": "latest"}
    ]
}

# Returns an iterable of all artifact versions that match the filters
artifacts = api.registries(filter=registry_filters).collections().versions(filter=version_filters)

# Print out the name, collection, aliases, tags, and created_at date of each artifact found
for art in artifacts:
    print(f"artifact name: {art.name}")
    print(f"collection artifact belongs to: { art.collection.name}")
    print(f"artifact aliases: {art.aliases}")
    print(f"tags attached to artifact: {art.tags}")
    print(f"artifact created at: {art.created_at}\n")

For more information on querying the registry, see Query registry items with MongoDB-style queries.

4.2 - agent

Start one or more sweep agents.

agent(
    sweep_id: str,
    function: Optional[Callable] = None,
    entity: Optional[str] = None,
    project: Optional[str] = None,
    count: Optional[int] = None
) -> None

The sweep agent uses the sweep_id to know which sweep it is a part of, what function to execute, and (optionally) how many trials to run.

Args
sweep_id The unique identifier for a sweep. A sweep ID is generated by W&B CLI or Python SDK.
function A function to call instead of the “program” specified in the sweep config.
entity The username or team name where you want to send W&B runs created by the sweep to. Ensure that the entity you specify already exists. If you don’t specify an entity, the run will be sent to your default entity, which is usually your username.
project The name of the project where W&B runs created from the sweep are sent to. If the project is not specified, the run is sent to a project labeled “Uncategorized”.
count The number of sweep config trials to try.
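As an illustration, the following sketch combines agent with wandb.sweep. The project name and the lr/loss names are placeholders, not part of the wandb API, and running the guarded portion requires a W&B login:

```python
# A minimal sweep-agent sketch. The project name and the "lr"/"loss"
# names are illustrative placeholders.
sweep_config = {
    "method": "random",
    "parameters": {"lr": {"values": [0.001, 0.01, 0.1]}},
}

if __name__ == "__main__":
    # Requires `pip install wandb` and `wandb login`.
    import wandb

    def train():
        run = wandb.init()
        # Read the hyperparameter chosen by the sweep for this trial.
        run.log({"loss": 1.0 / run.config.lr})
        run.finish()

    sweep_id = wandb.sweep(sweep_config, project="sweep-demo")
    # Run three trials in this process.
    wandb.agent(sweep_id, function=train, count=3)
```

Multiple agents started with the same sweep_id (for example, on different machines) pull trials from the same sweep.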

4.3 - Artifact

Flexible and lightweight building block for dataset and model versioning.

Artifact(
    name: str,
    type: str,
    description: (str | None) = None,
    metadata: (dict[str, Any] | None) = None,
    incremental: bool = (False),
    use_as: (str | None) = None
) -> None

Construct an empty W&B Artifact. Populate an artifact's contents with methods that begin with add. Once the artifact has all the desired files, you can call wandb.log_artifact() to log it.

Args
name A human-readable name for the artifact. Use the name to identify a specific artifact in the W&B App UI or programmatically. You can interactively reference an artifact with the use_artifact Public API. A name can contain letters, numbers, underscores, hyphens, and dots. The name must be unique across a project.
type The artifact’s type. Use the type of an artifact to both organize and differentiate artifacts. You can use any string that contains letters, numbers, underscores, hyphens, and dots. Common types include dataset or model. Include model within your type string if you want to link the artifact to the W&B Model Registry.
description A description of the artifact. For Model or Dataset Artifacts, add documentation for your standardized team model or dataset card. View an artifact’s description programmatically with the Artifact.description attribute or interactively with the W&B App UI. W&B renders the description as markdown in the W&B App.
metadata Additional information about an artifact. Specify metadata as a dictionary of key-value pairs. You can specify no more than 100 total keys.
incremental Use the Artifact.new_draft() method instead to modify an existing artifact.
use_as W&B Launch specific parameter. Not recommended for general use.
is_link Boolean indicating whether the artifact is a linked artifact (True) or a source artifact (False).
Returns
An Artifact object.
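As a sketch, constructing and logging an artifact might look like the following. The project name and metadata keys are illustrative, and the guarded portion requires a W&B login:

```python
# Illustrative metadata; you can specify up to 100 total keys.
metadata = {"num_classes": 10, "framework": "pytorch"}

if __name__ == "__main__":
    # Requires `pip install wandb` and `wandb login`.
    import wandb

    with wandb.init(project="artifact-demo") as run:  # placeholder project
        artifact = wandb.Artifact(
            name="mnist-model",      # must be unique across the project
            type="model",            # "model" enables Model Registry linking
            description="A demo model artifact.",
            metadata=metadata,
        )
        # Populate the artifact with add* methods before logging it.
        run.log_artifact(artifact)
```
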
Attributes
aliases List of one or more semantically-friendly references or identifying “nicknames” assigned to an artifact version. Aliases are mutable references that you can programmatically reference. Change an artifact’s alias with the W&B App UI or programmatically. See Create new artifact versions for more information.
collection The collection this artifact was retrieved from. A collection is an ordered group of artifact versions. If this artifact was retrieved from a portfolio / linked collection, that collection will be returned rather than the collection that an artifact version originated from. The collection that an artifact originates from is known as the source sequence.
commit_hash The hash returned when this artifact was committed.
created_at Timestamp when the artifact was created.
description A description of the artifact.
digest The logical digest of the artifact. The digest is the checksum of the artifact’s contents. If an artifact has the same digest as the current latest version, then log_artifact is a no-op.
entity The name of the entity that the artifact collection belongs to. If the artifact is a link, the entity will be the entity of the linked artifact.
file_count The number of files (including references).
history_step The nearest step at which history metrics were logged for the source run of the artifact.
id The artifact’s ID.
is_link Boolean flag indicating if the artifact is a link artifact. True: The artifact is a link artifact to a source artifact. False: The artifact is a source artifact.
linked_artifacts Returns a list of all the linked artifacts of a source artifact. If the artifact is a link artifact (artifact.is_link == True), it will return an empty list. Limited to 500 results.
manifest The artifact’s manifest. The manifest lists all of its contents, and can’t be changed once the artifact has been logged.
metadata User-defined artifact metadata. Structured data associated with the artifact.
name The artifact name and version of the artifact. A string with the format {collection}:{alias}. If fetched before an artifact is logged/saved, the name won’t contain the alias. If the artifact is a link, the name will be the name of the linked artifact.
project The name of the project that the artifact collection belongs to. If the artifact is a link, the project will be the project of the linked artifact.
qualified_name The entity/project/name of the artifact. If the artifact is a link, the qualified name will be the qualified name of the linked artifact path.
size The total size of the artifact in bytes. Includes any references tracked by this artifact.
source_artifact Returns the source artifact. The source artifact is the original logged artifact. If the artifact itself is a source artifact (artifact.is_link == False), it will return itself.
source_collection The artifact’s source collection. The source collection is the collection that the artifact was logged from.
source_entity The name of the entity of the source artifact.
source_name The artifact name and version of the source artifact. A string with the format {source_collection}:{alias}. Before the artifact is saved, contains only the name since the version is not yet known.
source_project The name of the project of the source artifact.
source_qualified_name The source_entity/source_project/source_name of the source artifact.
source_version The source artifact’s version. A string with the format v{number}.
state The status of the artifact. One of: “PENDING”, “COMMITTED”, or “DELETED”.
tags List of one or more tags assigned to this artifact version.
ttl The time-to-live (TTL) policy of an artifact. Artifacts are deleted shortly after a TTL policy’s duration passes. If set to None, the artifact deactivates TTL policies and will not be scheduled for deletion, even if there is a team default TTL. An artifact inherits a TTL policy from the team default if the team administrator defines a default TTL and there is no custom policy set on an artifact.
type The artifact’s type. Common types include dataset or model.
updated_at The time when the artifact was last updated.
url Constructs the URL of the artifact.
version The artifact’s version. A string with the format v{number}. If the artifact is a link artifact, the version will be from the linked collection.

Methods

add

View source

add(
    obj: WBValue,
    name: StrPath,
    overwrite: bool = (False)
) -> ArtifactManifestEntry

Add wandb.WBValue obj to the artifact.

Args
obj The object to add. Currently, supported types include Bokeh, JoinedTable, PartitionedTable, Table, Classes, ImageMask, BoundingBoxes2D, Audio, Image, Video, Html, and Object3D.
name The path within the artifact to add the object.
overwrite If True, overwrite existing objects with the same file path (if applicable).
Returns
The added manifest entry
Raises
ArtifactFinalizedError You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
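For example, a sketch of adding a wandb.Table to an artifact. The column names, values, and project name are illustrative; the guarded portion requires a W&B login:

```python
# Illustrative rows for a small wandb.Table.
columns = ["id", "prediction"]
rows = [[0, 0.91], [1, 0.13]]

if __name__ == "__main__":
    # Requires `pip install wandb` and `wandb login`.
    import wandb

    with wandb.init(project="artifact-demo") as run:
        artifact = wandb.Artifact("eval-results", type="dataset")
        table = wandb.Table(columns=columns, data=rows)
        # Store the table inside the artifact under the path "predictions".
        artifact.add(table, "predictions")
        run.log_artifact(artifact)
```
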

add_dir

View source

add_dir(
    local_path: str,
    name: (str | None) = None,
    skip_cache: (bool | None) = (False),
    policy: (Literal['mutable', 'immutable'] | None) = "mutable"
) -> None

Add a local directory to the artifact.

Args
local_path The path of the local directory.
name The subdirectory name within an artifact. The name you specify appears in the W&B App UI nested by artifact’s type. Defaults to the root of the artifact.
skip_cache If set to True, W&B will not copy/move files to the cache while uploading.
policy By default, set to “mutable”. If set to “mutable”, create temporary copies of the files to prevent corruption during upload. If set to “immutable”, disable protection and rely on the user not to delete or change the files.
Raises
ArtifactFinalizedError You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
ValueError Policy must be “mutable” or “immutable”

add_file

View source

add_file(
    local_path: str,
    name: (str | None) = None,
    is_tmp: (bool | None) = (False),
    skip_cache: (bool | None) = (False),
    policy: (Literal['mutable', 'immutable'] | None) = "mutable",
    overwrite: bool = (False)
) -> ArtifactManifestEntry

Add a local file to the artifact.

Args
local_path The path to the file being added.
name The path within the artifact to use for the file being added. Defaults to the basename of the file.
is_tmp If true, then the file is renamed deterministically to avoid collisions.
skip_cache If True, W&B will not copy files to the cache after uploading.
policy By default, set to “mutable”. If set to “mutable”, create a temporary copy of the file to prevent corruption during upload. If set to “immutable”, disable protection and rely on the user not to delete or change the file.
overwrite If True, overwrite the file if it already exists.
Returns
The added manifest entry.
Raises
ArtifactFinalizedError You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
ValueError Policy must be “mutable” or “immutable”
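A sketch combining add_file and add_dir. The file names and artifact paths are illustrative; the guarded portion requires a W&B login:

```python
import os
import tempfile

# Create a small local file to add; contents and names are illustrative.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "weights.txt")
with open(path, "w") as f:
    f.write("fake weights\n")

if __name__ == "__main__":
    # Requires `pip install wandb` and `wandb login`.
    import wandb

    with wandb.init(project="artifact-demo") as run:
        artifact = wandb.Artifact("demo-files", type="dataset")
        # Stored as "model/weights.txt" inside the artifact.
        artifact.add_file(path, name="model/weights.txt")
        # Add the whole directory under the subdirectory "raw".
        artifact.add_dir(tmpdir, name="raw")
        run.log_artifact(artifact)
```
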

add_reference

View source

add_reference(
    uri: (ArtifactManifestEntry | str),
    name: (StrPath | None) = None,
    checksum: bool = (True),
    max_objects: (int | None) = None
) -> Sequence[ArtifactManifestEntry]

Add a reference denoted by a URI to the artifact.

Unlike files or directories that you add to an artifact, references are not uploaded to W&B. For more information, see Track external files.

By default, the following schemes are supported:

  • http(s): The size and digest of the file will be inferred from the Content-Length and ETag response headers returned by the server.
  • s3: The checksum and size are pulled from the object metadata. If bucket versioning is enabled, then the version ID is also tracked.
  • gs: The checksum and size are pulled from the object metadata. If bucket versioning is enabled, then the version ID is also tracked.
  • https, domain matching *.blob.core.windows.net (Azure): The checksum and size are pulled from the blob metadata. If storage account versioning is enabled, then the version ID is also tracked.
  • file: The checksum and size are pulled from the file system. This scheme is useful if you have an NFS share or other externally mounted volume containing files you wish to track but not necessarily upload.

For any other scheme, the digest is just a hash of the URI and the size is left blank.

Args
uri The URI path of the reference to add. The URI path can be an object returned from Artifact.get_entry to store a reference to another artifact’s entry.
name The path within the artifact to place the contents of this reference.
checksum Whether or not to checksum the resource(s) located at the reference URI. Checksumming is strongly recommended as it enables automatic integrity validation. Disabling checksumming will speed up artifact creation, but reference directories will not be iterated through, so the objects in the directory will not be saved to the artifact. We recommend setting checksum=False when adding reference objects, in which case a new version will only be created if the reference URI changes.
max_objects The maximum number of objects to consider when adding a reference that points to a directory or bucket store prefix. By default, the maximum number of objects allowed for Amazon S3, GCS, Azure, and local files is 10,000,000. Other URI schemas do not have a maximum.
Returns
The added manifest entries.
Raises
ArtifactFinalizedError You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
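A sketch of tracking an external S3 prefix by reference. The bucket URI and project name are placeholders; the guarded portion requires a W&B login and bucket access:

```python
# Referenced objects are tracked by checksum/metadata, not uploaded to W&B.
bucket_uri = "s3://my-bucket/datasets/train"  # placeholder bucket

if __name__ == "__main__":
    # Requires `pip install wandb` and `wandb login`.
    import wandb

    with wandb.init(project="artifact-demo") as run:
        artifact = wandb.Artifact("external-data", type="dataset")
        # Track the objects under the prefix without copying them to W&B.
        artifact.add_reference(bucket_uri, name="train", max_objects=1000)
        run.log_artifact(artifact)
```
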

checkout

View source

checkout(
    root: (str | None) = None
) -> str

Replace the specified root directory with the contents of the artifact.

WARNING: This will delete all files in root that are not included in the artifact.

Args
root The directory to replace with this artifact’s files.
Returns
The path of the checked out contents.
Raises
ArtifactNotLoggedError If the artifact is not logged.

delete

View source

delete(
    delete_aliases: bool = (False)
) -> None

Delete an artifact and its files.

If called on a linked artifact (i.e. a member of a portfolio collection): only the link is deleted, and the source artifact is unaffected.

Use artifact.unlink() instead of artifact.delete() to remove a link between a source artifact and a linked artifact.

Args
delete_aliases If set to True, deletes all aliases associated with the artifact. Otherwise, this raises an exception if the artifact has existing aliases. This parameter is ignored if the artifact is linked (i.e. a member of a portfolio collection).
Raises
ArtifactNotLoggedError If the artifact is not logged.

download

View source

download(
    root: (StrPath | None) = None,
    allow_missing_references: bool = (False),
    skip_cache: (bool | None) = None,
    path_prefix: (StrPath | None) = None,
    multipart: (bool | None) = None
) -> FilePathStr

Download the contents of the artifact to the specified root directory.

Existing files located within root are not modified. Explicitly delete root before you call download if you want the contents of root to exactly match the artifact.

Args
root The directory in which W&B stores the artifact’s files.
allow_missing_references If set to True, any invalid reference paths will be ignored while downloading referenced files.
skip_cache If set to True, the artifact cache will be skipped when downloading and W&B will download each file into the default root or specified download directory.
path_prefix If specified, only files with a path that starts with the given prefix will be downloaded. Uses unix format (forward slashes).
multipart If set to None (default), the artifact will be downloaded in parallel using multipart download if individual file size is greater than 2GB. If set to True or False, the artifact will be downloaded in parallel or serially regardless of the file size.
Returns
The path to the downloaded contents.
Raises
ArtifactNotLoggedError If the artifact is not logged.
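A sketch of downloading only part of an artifact via the Public API. The artifact path is a placeholder; the guarded portion requires a W&B login:

```python
# Placeholder entity/project/collection:alias path.
artifact_path = "my-entity/my-project/mnist-model:latest"

if __name__ == "__main__":
    # Requires `pip install wandb` and `wandb login`.
    import wandb

    api = wandb.Api()
    artifact = api.artifact(artifact_path)
    # Download only files under "model/", bypassing the shared cache.
    local_dir = artifact.download(path_prefix="model/", skip_cache=True)
    print(local_dir)
```
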

file

View source

file(
    root: (str | None) = None
) -> StrPath

Download a single file artifact to the directory you specify with root.

Args
root The root directory to store the file. Defaults to ‘./artifacts/self.name/’.
Returns
The full path of the downloaded file.
Raises
ArtifactNotLoggedError If the artifact is not logged.
ValueError If the artifact contains more than one file.

files

View source

files(
    names: (list[str] | None) = None,
    per_page: int = 50
) -> ArtifactFiles

Iterate over all files stored in this artifact.

Args
names The filename paths relative to the root of the artifact you wish to list.
per_page The number of files to return per request.
Returns
An iterator containing File objects.
Raises
ArtifactNotLoggedError If the artifact is not logged.

finalize

View source

finalize() -> None

Finalize the artifact version.

You cannot modify an artifact version once it is finalized because the artifact is logged as a specific artifact version. Create a new artifact version to log more data to an artifact. An artifact is automatically finalized when you log the artifact with log_artifact.

get

View source

get(
    name: str
) -> (WBValue | None)

Get the WBValue object located at the artifact relative name.

Args
name The artifact relative name to retrieve.
Returns
W&B object that can be logged with wandb.log() and visualized in the W&B UI.
Raises
ArtifactNotLoggedError if the artifact isn’t logged or the run is offline

get_added_local_path_name

View source

get_added_local_path_name(
    local_path: str
) -> (str | None)

Get the artifact relative name of a file added by a local filesystem path.

Args
local_path The local path to resolve into an artifact relative name.
Returns
The artifact relative name.

get_entry

View source

get_entry(
    name: StrPath
) -> ArtifactManifestEntry

Get the entry with the given name.

Args
name The artifact relative name to get
Returns
A W&B object.
Raises
ArtifactNotLoggedError if the artifact isn’t logged or the run is offline.
KeyError if the artifact doesn’t contain an entry with the given name.

get_path

View source

get_path(
    name: StrPath
) -> ArtifactManifestEntry

Deprecated. Use get_entry(name).

is_draft

View source

is_draft() -> bool

Check if artifact is not saved.

Returns: Boolean. False if artifact is saved. True if artifact is not saved.

json_encode

View source

json_encode() -> dict[str, Any]

Returns the artifact encoded to the JSON format.

Returns
A dict with string keys representing attributes of the artifact.

link

View source

link(
    target_path: str,
    aliases: (list[str] | None) = None
) -> (Artifact | None)

Link this artifact to a portfolio (a promoted collection of artifacts).

Args
target_path The path to the portfolio inside a project. The target path must adhere to one of the following schemas {portfolio}, {project}/{portfolio} or {entity}/{project}/{portfolio}. To link the artifact to the Model Registry, rather than to a generic portfolio inside a project, set target_path to the following schema {"model-registry"}/{Registered Model Name} or {entity}/{"model-registry"}/{Registered Model Name}.
aliases A list of strings that uniquely identifies the artifact inside the specified portfolio.
Raises
ArtifactNotLoggedError If the artifact is not logged.
Returns
The linked artifact if linking was successful, otherwise None.
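A sketch of linking a logged artifact to the Model Registry. The registered model name, alias, and project are placeholders; the guarded portion requires a W&B login:

```python
# Placeholder {"model-registry"}/{Registered Model Name} target path.
registry_path = "model-registry/My Registered Model"

if __name__ == "__main__":
    # Requires `pip install wandb` and `wandb login`.
    import wandb

    with wandb.init(project="artifact-demo") as run:
        artifact = run.log_artifact(wandb.Artifact("mnist-model", type="model"))
        artifact.wait()  # ensure logging has finished before linking
        # Link into the Model Registry and tag this version as "staging".
        linked = artifact.link(registry_path, aliases=["staging"])
```
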

logged_by

View source

logged_by() -> (Run | None)

Get the W&B run that originally logged the artifact.

Returns
The W&B run that originally logged the artifact.
Raises
ArtifactNotLoggedError If the artifact is not logged.

new_draft

View source

new_draft() -> Artifact

Create a new draft artifact with the same content as this committed artifact.

Modifying an existing artifact creates a new artifact version known as an “incremental artifact”. The artifact returned can be extended or modified and logged as a new version.

Returns
An Artifact object.
Raises
ArtifactNotLoggedError If the artifact is not logged.

new_file

View source

@contextlib.contextmanager
new_file(
    name: str,
    mode: str = "x",
    encoding: (str | None) = None
) -> Iterator[IO]

Open a new temporary file and add it to the artifact.

Args
name The name of the new file to add to the artifact.
mode The file access mode to use to open the new file.
encoding The encoding used to open the new file.
Returns
A new file object that can be written to. Upon closing, the file will be automatically added to the artifact.
Raises
ArtifactFinalizedError You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
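A sketch using new_file to write a JSON config directly into an artifact. The file name and contents are illustrative; the guarded portion requires a W&B login:

```python
config = {"lr": 0.01}  # illustrative contents

if __name__ == "__main__":
    # Requires `pip install wandb` and `wandb login`.
    import json
    import wandb

    with wandb.init(project="artifact-demo") as run:
        artifact = wandb.Artifact("configs", type="dataset")
        # The file is created in a temporary location and added to the
        # artifact automatically when the context manager closes it.
        with artifact.new_file("config.json") as f:
            json.dump(config, f)
        run.log_artifact(artifact)
```
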

remove

View source

remove(
    item: (StrPath | ArtifactManifestEntry)
) -> None

Remove an item from the artifact.

Args
item The item to remove. Can be a specific manifest entry or the name of an artifact-relative path. If the item matches a directory all items in that directory will be removed.
Raises
ArtifactFinalizedError You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
FileNotFoundError If the item isn’t found in the artifact.

save

View source

save(
    project: (str | None) = None,
    settings: (wandb.Settings | None) = None
) -> None

Persist any changes made to the artifact.

If currently in a run, that run will log this artifact. If not currently in a run, a run of type “auto” is created to track this artifact.

Args
project A project to use for the artifact in the case that a run is not already in context.
settings A settings object to use when initializing an automatic run. Most commonly used in testing harness.

unlink

View source

unlink() -> None

Unlink this artifact if it is currently a member of a portfolio (a promoted collection of artifacts).

Raises
ArtifactNotLoggedError If the artifact is not logged.
ValueError If the artifact is not linked, i.e. it is not a member of a portfolio collection.

used_by

View source

used_by() -> list[Run]

Get a list of the runs that have used this artifact and its linked artifacts.

Returns
A list of Run objects.
Raises
ArtifactNotLoggedError If the artifact is not logged.

verify

View source

verify(
    root: (str | None) = None
) -> None

Verify that the contents of an artifact match the manifest.

All files in the directory are checksummed and the checksums are then cross-referenced against the artifact’s manifest. References are not verified.

Args
root The directory to verify. If None, the artifact will be downloaded to ‘./artifacts/self.name/’.
Raises
ArtifactNotLoggedError If the artifact is not logged.
ValueError If the verification fails.

wait

View source

wait(
    timeout: (int | None) = None
) -> Artifact

If needed, wait for this artifact to finish logging.

Args
timeout The time, in seconds, to wait.
Returns
An Artifact object.

__getitem__

View source

__getitem__(
    name: str
) -> (WBValue | None)

Get the WBValue object located at the artifact relative name.

Args
name The artifact relative name to get.
Returns
W&B object that can be logged with wandb.log() and visualized in the W&B UI.
Raises
ArtifactNotLoggedError If the artifact isn’t logged or the run is offline.

__setitem__

View source

__setitem__(
    name: str,
    item: WBValue
) -> ArtifactManifestEntry

Add item to the artifact at path name.

Args
name The path within the artifact to add the object.
item The object to add.
Returns
The added manifest entry
Raises
ArtifactFinalizedError You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.

4.4 - automations

Classes

class Automation: A local instance of a saved W&B automation.

class DoNothing: Defines an automation action that intentionally does nothing.

class MetricChangeFilter: Defines a filter that compares a change in a run metric against a user-defined threshold.

class MetricThresholdFilter: Defines a filter that compares a run metric against a user-defined threshold value.

class NewAutomation: A new automation to be created.

class OnAddArtifactAlias: A new alias is assigned to an artifact.

class OnCreateArtifact: A new artifact is created.

class OnLinkArtifact: A new artifact is linked to a collection.

class OnRunMetric: A run metric satisfies a user-defined condition.

class SendNotification: Defines an automation action that sends a (Slack) notification.

class SendWebhook: Defines an automation action that sends a webhook request.
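As a rough sketch, these classes compose into a new automation by chaining an event's then() method with an action. The collection path, team name, and the Api lookups below are assumptions for illustration only; consult the Automations guide for the exact calls to look up integrations and save the automation:

```python
# Placeholder message contents for the notification action.
notification = {"title": "Model linked", "text": "A new version was linked."}

if __name__ == "__main__":
    # Requires `pip install wandb` and `wandb login`.
    import wandb
    from wandb.automations import OnLinkArtifact, SendNotification

    api = wandb.Api()
    # Assumed lookups (placeholder names); verify signatures in the
    # Automations guide before relying on them.
    collection = api.artifact_collection("model", "my-team/my-project/my-models")
    slack = next(iter(api.slack_integrations(entity="my-team")))

    # Event + action -> a NewAutomation, ready to be saved via the Api.
    new_automation = OnLinkArtifact(scope=collection).then(
        SendNotification.from_integration(
            integration=slack,
            title=notification["title"],
            text=notification["text"],
        )
    )
```
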

4.4.1 - Automation

A local instance of a saved W&B automation.

Attributes
name The name of this automation.
description An optional description of this automation.
enabled Whether this automation is enabled. Only enabled automations will trigger.
scope The scope in which the triggering event must occur.
event The event that will trigger this automation.
action The action that will execute when this automation is triggered.

4.4.2 - DoNothing

Defines an automation action that intentionally does nothing.

Attributes
no_op Placeholder field which exists only to satisfy backend schema requirements. There should never be a need to set this field explicitly, as its value is ignored.
action_type The kind of action to be triggered.

4.4.3 - MetricChangeFilter

Defines a filter that compares a change in a run metric against a user-defined threshold.

The change is calculated over “tumbling” windows, i.e. the difference between the current window and the non-overlapping prior window.

Attributes
prior_window Size of the prior window over which the metric is aggregated (ignored if agg is None). If omitted, defaults to the size of the current window.
name Name of the observed metric.
agg Aggregate operation, if any, to apply over the window size.
window Size of the window over which the metric is aggregated (ignored if agg is None).
threshold Threshold value to compare against.

4.4.4 - MetricThresholdFilter

Defines a filter that compares a run metric against a user-defined threshold value.

Attributes
cmp Comparison operator used to compare the metric value (left) vs. the threshold value (right).
name Name of the observed metric.
agg Aggregate operation, if any, to apply over the window size.
window Size of the window over which the metric is aggregated (ignored if agg is None).
threshold Threshold value to compare against.

4.4.5 - NewAutomation

A new automation to be created.

Attributes
name The name of this automation.
description An optional description of this automation.
enabled Whether this automation is enabled. Only enabled automations will trigger.
event The event that will trigger this automation.
action The action that will execute when this automation is triggered.
scope The scope in which the triggering event must occur.

4.4.6 - OnAddArtifactAlias

A new alias is assigned to an artifact.

Attributes
scope The scope of the event.
filter Additional condition(s), if any, that must be met for this event to trigger an automation.

Methods

then

View source

then(
    action: InputAction
) -> NewAutomation

Define a new Automation in which this event triggers the given action.

4.4.7 - OnCreateArtifact

A new artifact is created.

Attributes
scope The scope of the event: only artifact collections are valid scopes for this event.
filter Additional condition(s), if any, that must be met for this event to trigger an automation.

Methods

then

View source

then(
    action: InputAction
) -> NewAutomation

Define a new Automation in which this event triggers the given action.

4.4.8 - OnLinkArtifact

A new artifact is linked to a collection.

Attributes
scope The scope of the event.
filter Additional condition(s), if any, that must be met for this event to trigger an automation.

Methods

then

View source

then(
    action: InputAction
) -> NewAutomation

Define a new Automation in which this event triggers the given action.

4.4.9 - OnRunMetric

A run metric satisfies a user-defined condition.

Attributes
scope The scope of the event: only projects are valid scopes for this event.
filter Run and/or metric condition(s) that must be satisfied for this event to trigger an automation.

Methods

then

View source

then(
    action: InputAction
) -> NewAutomation

Define a new Automation in which this event triggers the given action.

4.4.10 - SendNotification

Defines an automation action that sends a (Slack) notification.

Attributes
title The title of the sent notification.
message The message body of the sent notification.
severity The severity (INFO, WARN, ERROR) of the sent notification.
action_type The kind of action to be triggered.

Methods

from_integration

View source

@classmethod
from_integration(
    integration: SlackIntegration,
    *,
    title: str = "",
    text: str = "",
    level: AlertSeverity = AlertSeverity.INFO
) -> Self

Define a notification action that sends to the given (Slack) integration.

4.4.11 - SendWebhook

Defines an automation action that sends a webhook request.

Attributes
request_payload The payload, possibly with template variables, to send in the webhook request.
action_type The kind of action to be triggered.

Methods

from_integration

View source

@classmethod
from_integration(
    integration: WebhookIntegration,
    *,
    payload: Optional[SerializedToJson[dict[str, Any]]] = None
) -> Self

Define a webhook action that sends to the given (webhook) integration.

4.5 - controller

Public sweep controller constructor.

controller(
    sweep_id_or_config: Optional[Union[str, Dict]] = None,
    entity: Optional[str] = None,
    project: Optional[str] = None
) -> "_WandbController"

Usage:

import wandb

tuner = wandb.controller(...)
print(tuner.sweep_config)
print(tuner.sweep_id)
tuner.configure_search(...)
tuner.configure_stopping(...)

4.6 - Data Types

This module defines data types for logging rich, interactive visualizations to W&B.

Data types include common media types, like images, audio, and videos, flexible containers for information, like tables and HTML, and more.

For more on logging media, see our guide

For more on logging structured data for interactive dataset and model analysis, see our guide to W&B Tables.

All of these special data types are subclasses of WBValue. All the data types serialize to JSON, since that is what wandb uses to save the objects locally and upload them to the W&B server.

Classes

class Audio: Wandb class for audio clips.

class BoundingBoxes2D: Format images with 2D bounding box overlays for logging to W&B.

class Graph: Wandb class for graphs.

class Histogram: wandb class for histograms.

class Html: A class for logging HTML content to W&B.

class Image: Format images for logging to W&B.

class ImageMask: Format image masks or overlays for logging to W&B.

class Molecule: Wandb class for 3D Molecular data.

class Object3D: Wandb class for 3D point clouds.

class Plotly: Wandb class for plotly plots.

class Table: The Table class used to display and analyze tabular data.

class Video: A class for logging videos to W&B.

class WBTraceTree: Media object for trace tree data.

4.6.1 - Audio

Wandb class for audio clips.

Audio(
    data_or_path, sample_rate=None, caption=None
)
Args
data_or_path (string or numpy array) A path to an audio file or a numpy array of audio data.
sample_rate (int) Sample rate, required when passing in raw numpy array of audio data.
caption (string) Caption to display with audio.
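A sketch of logging a generated tone as raw numpy audio. The frequency, duration, and project name are illustrative; the guarded portion requires a W&B login:

```python
import numpy as np

# One second of a 440 Hz sine tone; the values are illustrative.
sample_rate = 16000
t = np.linspace(0, 1, sample_rate, endpoint=False)
waveform = 0.5 * np.sin(2 * np.pi * 440 * t)

if __name__ == "__main__":
    # Requires `pip install wandb` and `wandb login`.
    import wandb

    run = wandb.init(project="audio-demo")  # placeholder project
    # sample_rate is required when passing raw numpy data.
    run.log({"tone": wandb.Audio(waveform, sample_rate=sample_rate, caption="440 Hz")})
    run.finish()
```
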

Methods

durations

View source

@classmethod
durations(
    audio_list
)

resolve_ref

View source

resolve_ref()

sample_rates

View source

@classmethod
sample_rates(
    audio_list
)

4.6.2 - BoundingBoxes2D

Format images with 2D bounding box overlays for logging to W&B.

BoundingBoxes2D(
    val: dict,
    key: str
) -> None
Args
val (dictionary) A dictionary of the following form:
  • box_data: (list of dictionaries) One dictionary for each bounding box, containing:
    • position: (dictionary) The position and size of the bounding box, in one of two formats. Note that boxes need not all use the same format.
      • {“minX”, “minY”, “maxX”, “maxY”}: (dictionary) A set of coordinates defining the upper and lower bounds of the box (the bottom left and top right corners).
      • {“middle”, “width”, “height”}: (dictionary) A set of coordinates defining the center and dimensions of the box, with “middle” as a list [x, y] for the center point and “width” and “height” as numbers.
    • domain: (string) One of two options for the bounding box coordinate domain:
      • null: By default, or if no argument is passed, the coordinate domain is assumed to be relative to the original image, expressing this box as a fraction or percentage of the original image. This means all coordinates and dimensions passed into the “position” argument are floating point numbers between 0 and 1.
      • “pixel”: (string literal) The coordinate domain is set to the pixel space. This means all coordinates and dimensions passed into “position” are integers within the bounds of the image dimensions.
    • class_id: (integer) The class label id for this box.
    • scores: (dictionary of string to number, optional) A mapping of named fields to numerical values (float or int); can be used for filtering boxes in the UI based on a range of values for the corresponding field.
    • box_caption: (string, optional) A string to be displayed as the label text above this box in the UI, often composed of the class label, class name, and/or scores.
  • class_labels: (dictionary, optional) A map of integer class labels to their readable class names.
key (string) The readable name or id for this set of bounding boxes (e.g. predictions, ground_truth)
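Because both domains describe the same box, a pixel-domain position can be rescaled into the default relative domain by dividing by the image dimensions. A minimal sketch (the helper name is our own, not part of the library):

```python
def pixel_box_to_relative(position, image_width, image_height):
    """Convert a pixel-domain {"minX", "minY", "maxX", "maxY"} box
    to the default relative/fractional domain."""
    return {
        "minX": position["minX"] / image_width,
        "maxX": position["maxX"] / image_width,
        "minY": position["minY"] / image_height,
        "maxY": position["maxY"] / image_height,
    }

# A 300x200 image with a box spanning x in [30, 60] and y in [60, 80]
rel = pixel_box_to_relative(
    {"minX": 30, "maxX": 60, "minY": 60, "maxY": 80}, 300, 200
)
```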

Examples:

Log bounding boxes for a single image

import numpy as np
import wandb

run = wandb.init()
image = np.random.randint(low=0, high=256, size=(200, 300, 3))

class_labels = {0: "person", 1: "car", 2: "road", 3: "building"}

img = wandb.Image(
    image,
    boxes={
        "predictions": {
            "box_data": [
                {
                    # one box expressed in the default relative/fractional domain
                    "position": {
                        "minX": 0.1,
                        "maxX": 0.2,
                        "minY": 0.3,
                        "maxY": 0.4,
                    },
                    "class_id": 1,
                    "box_caption": class_labels[1],
                    "scores": {"acc": 0.2, "loss": 1.2},
                },
                {
                    # another box expressed in the pixel domain
                    "position": {
                        "middle": [150, 20],
                        "width": 68,
                        "height": 112,
                    },
                    "domain": "pixel",
                    "class_id": 3,
                    "box_caption": "a building",
                    "scores": {"acc": 0.5, "loss": 0.7},
                },
                # Log as many boxes as needed
            ],
            "class_labels": class_labels,
        }
    },
)

run.log({"driving_scene": img})

Log a bounding box overlay to a Table

import numpy as np
import wandb

run = wandb.init()
image = np.random.randint(low=0, high=256, size=(200, 300, 3))

class_labels = {0: "person", 1: "car", 2: "road", 3: "building"}

class_set = wandb.Classes(
    [
        {"name": "person", "id": 0},
        {"name": "car", "id": 1},
        {"name": "road", "id": 2},
        {"name": "building", "id": 3},
    ]
)

img = wandb.Image(
    image,
    boxes={
        "predictions": {
            "box_data": [
                {
                    # one box expressed in the default relative/fractional domain
                    "position": {
                        "minX": 0.1,
                        "maxX": 0.2,
                        "minY": 0.3,
                        "maxY": 0.4,
                    },
                    "class_id": 1,
                    "box_caption": class_labels[1],
                    "scores": {"acc": 0.2, "loss": 1.2},
                },
                {
                    # another box expressed in the pixel domain
                    "position": {
                        "middle": [150, 20],
                        "width": 68,
                        "height": 112,
                    },
                    "domain": "pixel",
                    "class_id": 3,
                    "box_caption": "a building",
                    "scores": {"acc": 0.5, "loss": 0.7},
                },
                # Log as many boxes as needed
            ],
            "class_labels": class_labels,
        }
    },
    classes=class_set,
)

table = wandb.Table(columns=["image"])
table.add_data(img)
run.log({"driving_scene": table})

Methods

type_name

View source

@classmethod
type_name() -> str

validate

View source

validate(
    val: dict
) -> bool

4.6.3 - Graph

Wandb class for graphs.

Graph(
    format="keras"
)

This class is typically used for saving and displaying neural net models. It represents the graph as an array of nodes and edges. The nodes can have labels that can be visualized by wandb.

Examples:

Import a keras model:

Graph.from_keras(keras_model)

Methods

add_edge

View source

add_edge(
    from_node, to_node
)

add_node

View source

add_node(
    node=None, **node_kwargs
)

from_keras

View source

@classmethod
from_keras(
    model
)

pprint

View source

pprint()

__getitem__

View source

__getitem__(
    nid
)

4.6.4 - Histogram

Wandb class for histograms.

Histogram(
    sequence: Optional[Sequence] = None,
    np_histogram: Optional['NumpyHistogram'] = None,
    num_bins: int = 64
) -> None

This object works just like numpy's histogram function: https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html

Examples:

Generate histogram from a sequence

wandb.Histogram([1, 2, 3])

Efficiently initialize from np.histogram.

hist = np.histogram(data)
wandb.Histogram(np_histogram=hist)
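For reference, np.histogram returns the (counts, bin_edges) tuple that the np_histogram argument expects; the bins and histogram attributes above mirror its two halves:

```python
import numpy as np

counts, bin_edges = np.histogram([1, 1, 2, 3], bins=4)
# counts holds the number of elements per bin; bin_edges has one
# extra entry, since n bins are bounded by n + 1 edges.
```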
Args
sequence (array_like) input data for histogram
np_histogram (numpy histogram) alternative input of a precomputed histogram
num_bins (int) Number of bins for the histogram. The default number of bins is 64. The maximum number of bins is 512
Attributes
bins ([float]) edges of bins
histogram ([int]) number of elements falling in each bin
Class Variables
MAX_LENGTH 512

4.6.5 - Html

A class for logging HTML content to W&B.

Html(
    data: Union[str, 'TextIO'],
    inject: bool = (True),
    data_is_not_path: bool = (False)
) -> None
Args
data A string that is a path to a file with the extension “.html”, or a string or IO object containing literal HTML.
inject Add a stylesheet to the HTML object. If set to False the HTML will pass through unchanged.
data_is_not_path If set to False, the data may be treated as a path to a file; if set to True, the data is always treated as literal HTML content.

Methods

inject_head

View source

inject_head() -> None

4.6.6 - Image

Format images for logging to W&B.

Image(
    data_or_path: "ImageDataOrPathType",
    mode: Optional[str] = None,
    caption: Optional[str] = None,
    grouping: Optional[int] = None,
    classes: Optional[Union['Classes', Sequence[dict]]] = None,
    boxes: Optional[Union[Dict[str, 'BoundingBoxes2D'], Dict[str, dict]]] = None,
    masks: Optional[Union[Dict[str, 'ImageMask'], Dict[str, dict]]] = None,
    file_type: Optional[str] = None
) -> None
Args
data_or_path (numpy array, string, io) Accepts numpy array of image data, or a PIL image. The class attempts to infer the data format and converts it.
mode (string) The PIL mode for an image. Most common are “L”, “RGB”, “RGBA”. Full explanation at https://pillow.readthedocs.io/en/stable/handbook/concepts.html#modes
caption (string) Label for display of image.

Note: When logging a torch.Tensor as a wandb.Image, images are normalized. If you do not want to normalize your images, convert your tensors to a PIL Image.

Examples:

Create a wandb.Image from a numpy array

import numpy as np
import wandb

with wandb.init() as run:
    examples = []
    for i in range(3):
        pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
        image = wandb.Image(pixels, caption=f"random field {i}")
        examples.append(image)
    run.log({"examples": examples})

Create a wandb.Image from a PILImage

import numpy as np
from PIL import Image as PILImage
import wandb

with wandb.init() as run:
    examples = []
    for i in range(3):
        pixels = np.random.randint(
            low=0, high=256, size=(100, 100, 3), dtype=np.uint8
        )
        pil_image = PILImage.fromarray(pixels, mode="RGB")
        image = wandb.Image(pil_image, caption=f"random field {i}")
        examples.append(image)
    run.log({"examples": examples})

Log a .jpg rather than a .png (the default)

import numpy as np
import wandb

with wandb.init() as run:
    examples = []
    for i in range(3):
        pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
        image = wandb.Image(pixels, caption=f"random field {i}", file_type="jpg")
        examples.append(image)
    run.log({"examples": examples})
Attributes

Methods

all_boxes

View source

@classmethod
all_boxes(
    images: Sequence['Image'],
    run: "LocalRun",
    run_key: str,
    step: Union[int, str]
) -> Union[List[Optional[dict]], bool]

all_captions

View source

@classmethod
all_captions(
    images: Sequence['Media']
) -> Union[bool, Sequence[Optional[str]]]

all_masks

View source

@classmethod
all_masks(
    images: Sequence['Image'],
    run: "LocalRun",
    run_key: str,
    step: Union[int, str]
) -> Union[List[Optional[dict]], bool]

guess_mode

View source

guess_mode(
    data: Union['np.ndarray', 'torch.Tensor'],
    file_type: Optional[str] = None
) -> str

Guess what type of image the np.array is representing.

to_uint8

View source

@classmethod
to_uint8(
    data: "np.ndarray"
) -> "np.ndarray"

Convert image data to uint8.

Convert floating point images in the range [0, 1] and integer images in the range [0, 255] to uint8, clipping if necessary.
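The same conversion can be sketched in plain NumPy (this helper is our own illustration, not the library internals):

```python
import numpy as np

def to_uint8_sketch(data: np.ndarray) -> np.ndarray:
    # Floating point images are assumed to lie in [0, 1]; scale them up.
    if np.issubdtype(data.dtype, np.floating):
        data = data * 255
    # Clip out-of-range values, then truncate to uint8.
    return np.clip(data, 0, 255).astype(np.uint8)
```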

Class Variables
MAX_DIMENSION 65500
MAX_ITEMS 108

4.6.7 - ImageMask

Format image masks or overlays for logging to W&B.

ImageMask(
    val: dict,
    key: str
) -> None
Args
val (dictionary) One of these two keys to represent the image:
  mask_data: (2D numpy array) The mask containing an integer class label for each pixel in the image.
  path: (string) The path to a saved image file of the mask.
  class_labels: (dictionary of integers to strings, optional) A mapping of the integer class labels in the mask to readable class names. These default to class_0, class_1, class_2, etc.
key (string) The readable name or id for this mask type (e.g. predictions, ground_truth)

Examples:

Logging a single masked image

import numpy as np
import wandb

run = wandb.init()
image = np.random.randint(low=0, high=256, size=(100, 100, 3), dtype=np.uint8)
predicted_mask = np.empty((100, 100), dtype=np.uint8)
ground_truth_mask = np.empty((100, 100), dtype=np.uint8)

predicted_mask[:50, :50] = 0
predicted_mask[50:, :50] = 1
predicted_mask[:50, 50:] = 2
predicted_mask[50:, 50:] = 3

ground_truth_mask[:25, :25] = 0
ground_truth_mask[25:, :25] = 1
ground_truth_mask[:25, 25:] = 2
ground_truth_mask[25:, 25:] = 3

class_labels = {0: "person", 1: "tree", 2: "car", 3: "road"}

masked_image = wandb.Image(
    image,
    masks={
        "predictions": {
            "mask_data": predicted_mask,
            "class_labels": class_labels,
        },
        "ground_truth": {
            "mask_data": ground_truth_mask,
            "class_labels": class_labels,
        },
    },
)
run.log({"img_with_masks": masked_image})

Log a masked image inside a Table

import numpy as np
import wandb

run = wandb.init()
image = np.random.randint(low=0, high=256, size=(100, 100, 3), dtype=np.uint8)
predicted_mask = np.empty((100, 100), dtype=np.uint8)
ground_truth_mask = np.empty((100, 100), dtype=np.uint8)

predicted_mask[:50, :50] = 0
predicted_mask[50:, :50] = 1
predicted_mask[:50, 50:] = 2
predicted_mask[50:, 50:] = 3

ground_truth_mask[:25, :25] = 0
ground_truth_mask[25:, :25] = 1
ground_truth_mask[:25, 25:] = 2
ground_truth_mask[25:, 25:] = 3

class_labels = {0: "person", 1: "tree", 2: "car", 3: "road"}

class_set = wandb.Classes(
    [
        {"name": "person", "id": 0},
        {"name": "tree", "id": 1},
        {"name": "car", "id": 2},
        {"name": "road", "id": 3},
    ]
)

masked_image = wandb.Image(
    image,
    masks={
        "predictions": {
            "mask_data": predicted_mask,
            "class_labels": class_labels,
        },
        "ground_truth": {
            "mask_data": ground_truth_mask,
            "class_labels": class_labels,
        },
    },
    classes=class_set,
)

table = wandb.Table(columns=["image"])
table.add_data(masked_image)
run.log({"random_field": table})

Methods

type_name

View source

@classmethod
type_name() -> str

validate

View source

validate(
    val: dict
) -> bool

4.6.8 - Molecule

Wandb class for 3D Molecular data.

Molecule(
    data_or_path: Union[str, 'TextIO'],
    caption: Optional[str] = None,
    **kwargs
) -> None
Args
data_or_path (string, io) Molecule can be initialized from a file name or an io object.
caption (string) Caption associated with the molecule for display.

Methods

from_rdkit

View source

@classmethod
from_rdkit(
    data_or_path: "RDKitDataType",
    caption: Optional[str] = None,
    convert_to_3d_and_optimize: bool = (True),
    mmff_optimize_molecule_max_iterations: int = 200
) -> "Molecule"

Convert RDKit-supported file/object types to wandb.Molecule.

Args
data_or_path (string, rdkit.Chem.rdchem.Mol) Molecule can be initialized from a file name or an rdkit.Chem.rdchem.Mol object.
caption (string) Caption associated with the molecule for display.
convert_to_3d_and_optimize (bool) Convert to rdkit.Chem.rdchem.Mol with 3D coordinates. This is an expensive operation that may take a long time for complicated molecules.
mmff_optimize_molecule_max_iterations (int) Number of iterations to use in rdkit.Chem.AllChem.MMFFOptimizeMolecule

from_smiles

View source

@classmethod
from_smiles(
    data: str,
    caption: Optional[str] = None,
    sanitize: bool = (True),
    convert_to_3d_and_optimize: bool = (True),
    mmff_optimize_molecule_max_iterations: int = 200
) -> "Molecule"

Convert SMILES string to wandb.Molecule.

Args
data (string) SMILES string.
caption (string) Caption associated with the molecule for display
sanitize (bool) Check if the molecule is chemically reasonable by the RDKit’s definition.
convert_to_3d_and_optimize (bool) Convert to rdkit.Chem.rdchem.Mol with 3D coordinates. This is an expensive operation that may take a long time for complicated molecules.
mmff_optimize_molecule_max_iterations (int) Number of iterations to use in rdkit.Chem.AllChem.MMFFOptimizeMolecule
Class Variables
SUPPORTED_RDKIT_TYPES
SUPPORTED_TYPES

4.6.9 - Object3D

Wandb class for 3D point clouds.

Object3D(
    data_or_path: Union['np.ndarray', str, 'TextIO', dict],
    caption: Optional[str] = None,
    **kwargs
) -> None
Args
data_or_path (numpy array, string, io) Object3D can be initialized from a file or a numpy array. You can pass a path to a file or an io object, and a file_type, which must be one of SUPPORTED_TYPES.

The shape of the numpy array must be one of either:

[[x y z],       ...] nx3
[[x y z c],     ...] nx4 where c is a category with supported range [1, 14]
[[x y z r g b], ...] nx6 where r, g, b are color values

Methods

from_file

View source

@classmethod
from_file(
    data_or_path: Union['TextIO', str],
    file_type: Optional['FileFormat3D'] = None
) -> "Object3D"

Initializes Object3D from a file or stream.

Args
data_or_path (Union["TextIO", str]): A path to a file or a TextIO stream.
file_type (str): Specifies the data format passed to data_or_path. Required when data_or_path is a TextIO stream. This parameter is ignored if a file path is provided; the type is taken from the file extension.

from_numpy

View source

@classmethod
from_numpy(
    data: "np.ndarray"
) -> "Object3D"

Initializes Object3D from a numpy array.

Args
data (numpy array): Each entry in the array will represent one point in the point cloud.

The shape of the numpy array must be one of either:

[[x y z],       ...]  # nx3.
[[x y z c],     ...]  # nx4 where c is a category with supported range [1, 14].
[[x y z r g b], ...]  # nx6 where r, g, b are color values.
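For example, an nx6 array with per-point RGB colors can be assembled by stacking positions and colors (values here are random placeholders):

```python
import numpy as np

positions = np.random.rand(100, 3)                 # x, y, z positions
colors = np.random.randint(0, 256, size=(100, 3))  # r, g, b per point
point_cloud = np.hstack([positions, colors])       # shape (100, 6)
```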

from_point_cloud

View source

@classmethod
from_point_cloud(
    points: Sequence['Point'],
    boxes: Sequence['Box3D'],
    vectors: Optional[Sequence['Vector3D']] = None,
    point_cloud_type: "PointCloudType" = "lidar/beta"
) -> "Object3D"

Initializes Object3D from a python object.

Args
points (Sequence["Point"]): The points in the point cloud.
boxes (Sequence["Box3D"]): 3D bounding boxes for labeling the point cloud. Boxes are displayed in point cloud visualizations.
vectors (Optional[Sequence["Vector3D"]]): Each vector is displayed in the point cloud visualization. Can be used to indicate directionality of bounding boxes. Defaults to None.
point_cloud_type ("lidar/beta"): At this time, only the "lidar/beta" type is supported. Defaults to "lidar/beta".
Class Variables
SUPPORTED_POINT_CLOUD_TYPES
SUPPORTED_TYPES

4.6.10 - Plotly

Wandb class for plotly plots.

Plotly(
    val: Union['plotly.Figure', 'matplotlib.artist.Artist']
)
Args
val matplotlib or plotly figure

Methods

make_plot_media

View source

@classmethod
make_plot_media(
    val: Union['plotly.Figure', 'matplotlib.artist.Artist']
) -> Union[Image, 'Plotly']

4.6.11 - Table

The Table class used to display and analyze tabular data.

Table(
    columns=None, data=None, rows=None, dataframe=None, dtype=None, optional=(True),
    allow_mixed_types=(False)
)

Unlike traditional spreadsheets, Tables support numerous types of data: scalar values, strings, numpy arrays, and most subclasses of wandb.data_types.Media. This means you can embed Images, Video, Audio, and other sorts of rich, annotated media directly in Tables, alongside other traditional scalar values.

This class is the primary class used to generate the Table Visualizer in the UI: https://docs.wandb.ai/guides/data-vis/tables.

Args
columns (List[str]) Names of the columns in the table. Defaults to [“Input”, “Output”, “Expected”].
data (List[List[any]]) 2D row-oriented array of values.
dataframe (pandas.DataFrame) DataFrame object used to create the table. When set, data and columns arguments are ignored.
optional (Union[bool, List[bool]]) Determines if None values are allowed. Defaults to True. A single bool value applies the setting to all columns specified at construction time; a list of bool values applies the setting to each respective column, and must be the same length as columns.
allow_mixed_types (bool) Determines if columns are allowed to have mixed types (disables type validation). Defaults to False.

Methods

add_column

View source

add_column(
    name, data, optional=(False)
)

Adds a column of data to the table.

Args
name (str) - the unique name of the column
data (list) - the data for the column
optional (bool) - if null-like values are permitted

add_computed_columns

View source

add_computed_columns(
    fn
)

Adds one or more computed columns based on existing data.

Args
fn A function which accepts one or two parameters: ndx (int), the index of the row (only included if include_ndx is set to True), and row (dict), a dictionary keyed by existing columns. The function is expected to return a dict representing new columns for that row, keyed by the new column names.
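To illustrate the expected fn signature, here is a standalone sketch (the column names are our own) applying such a function row by row:

```python
def add_total(ndx, row):
    # Return a dict of new column values for this row,
    # keyed by the new column names.
    return {"ndx_copy": ndx, "total": row["a"] + row["b"]}

rows = [{"a": 1, "b": 2}, {"a": 3, "b": 4}]
new_columns = [add_total(i, row) for i, row in enumerate(rows)]
```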

add_data

View source

add_data(
    *data
)

Adds a new row of data to the table. The maximum number of rows in a table is determined by wandb.Table.MAX_ARTIFACT_ROWS.

The length of the data should match the number of columns in the table.

add_row

View source

add_row(
    *row
)

Deprecated; use add_data instead.

cast

View source

cast(
    col_name, dtype, optional=(False)
)

Casts a column to a specific data type.

This can be one of the normal python classes, an internal W&B type, or an example object, like an instance of wandb.Image or wandb.Classes.

Args
col_name (str) - The name of the column to cast.
dtype (class, wandb.wandb_sdk.interface._dtypes.Type, any) - The target dtype.
optional (bool) - If the column should allow Nones.

get_column

View source

get_column(
    name, convert_to=None
)

Retrieves a column from the table and optionally converts it to a NumPy object.

Args
name (str) - the name of the column
convert_to (str, optional) - “numpy”: will convert the underlying data to numpy object

get_dataframe

View source

get_dataframe()

Returns a pandas.DataFrame of the table.

get_index

View source

get_index()

Returns an array of row indexes for use in other tables to create links.

index_ref

View source

index_ref(
    index
)

Gets a reference of the index of a row in the table.

iterrows

View source

iterrows()

Returns the table data by row, showing the index of the row and the relevant data.

Yields

index : int
  The index of the row. Using this value in other W&B tables will automatically build a relationship between the tables.
row : List[any]
  The data of the row.

set_fk

View source

set_fk(
    col_name, table, table_col
)

set_pk

View source

set_pk(
    col_name
)
Class Variables
MAX_ARTIFACT_ROWS 200000
MAX_ROWS 10000

4.6.12 - Video

A class for logging videos to W&B.

Video(
    data_or_path: Union['np.ndarray', str, 'TextIO', 'BytesIO'],
    caption: Optional[str] = None,
    fps: Optional[int] = None,
    format: Optional[Literal['gif', 'mp4', 'webm', 'ogg']] = None
)
Args
data_or_path Video can be initialized with a path to a file, an io object, or a numpy tensor. The numpy tensor must be either 4-dimensional or 5-dimensional, with dimensions (number of frames, channel, height, width) or (batch, number of frames, channel, height, width). When initializing with a numpy array or io object, the format must be specified with the format argument.
caption Caption associated with the video for display.
fps The frame rate to use when encoding raw video frames. Default value is 4. This parameter has no effect when data_or_path is a string or bytes.
format Format of video, necessary if initializing with a numpy array or io object. This parameter will be used to determine the format to use when encoding the video data. Accepted values are “gif”, “mp4”, “webm”, or “ogg”.
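For example, a 4-dimensional tensor for a 10-frame RGB video at 64x64 resolution follows the (number of frames, channel, height, width) layout (values here are random placeholders):

```python
import numpy as np

# (number of frames, channel, height, width)
frames = np.random.randint(0, 256, size=(10, 3, 64, 64), dtype=np.uint8)
```

A tensor like this would then be passed as data_or_path along with an explicit format, e.g. "gif".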

Methods

encode

View source

encode(
    fps: int = 4
) -> None
Class Variables
EXTS

4.6.13 - WBTraceTree

Media object for trace tree data.

WBTraceTree(
    root_span: Span,
    model_dict: typing.Optional[dict] = None
)
Args
root_span (Span): The root span of the trace tree.
model_dict (dict, optional): A dictionary containing the model dump. NOTE: model_dict is a completely user-defined dict. The UI will render a JSON viewer for this dict, giving special treatment to dictionaries with a _kind key. This is because model vendors have such different serialization formats that we need to be flexible here.

4.7 - finish

Finish a run and upload any remaining data.

finish(
    exit_code: (int | None) = None,
    quiet: (bool | None) = None
) -> None

Marks the completion of a W&B run and ensures all data is synced to the server. The run’s final state is determined by its exit conditions and sync status.

Run States:

  • Running: Active run that is logging data and/or sending heartbeats.
  • Crashed: Run that stopped sending heartbeats unexpectedly.
  • Finished: Run completed successfully (exit_code=0) with all data synced.
  • Failed: Run completed with errors (exit_code!=0).
Args
exit_code Integer indicating the run’s exit status. Use 0 for success, any other value marks the run as failed.
quiet Deprecated. Configure logging verbosity using wandb.Settings(quiet=...).

4.8 - Import & Export API

Classes

class Api: Used for querying the wandb server.

class File: File is a class associated with a file saved by wandb.

class Files: An iterable collection of File objects.

class Job

class Project: A project is a namespace for runs.

class Projects: An iterable collection of Project objects.

class QueuedRun: A single queued run associated with an entity and project. Call run = queued_run.wait_until_running() or run = queued_run.wait_until_finished() to access the run.

class Run: A single run associated with an entity and project.

class RunQueue

class Runs: An iterable collection of runs associated with a project and optional filter.

class Sweep: A set of runs associated with a sweep.

4.8.1 - Api

Used for querying the wandb server.

Api(
    overrides: Optional[Dict[str, Any]] = None,
    timeout: Optional[int] = None,
    api_key: Optional[str] = None
) -> None

Examples:

Most common way to initialize

>>> wandb.Api()
Args
overrides (dict) You can set base_url if you are using a wandb server other than https://api.wandb.ai. You can also set defaults for entity, project, and run.
Attributes

Methods

artifact

View source

artifact(
    name: str,
    type: Optional[str] = None
)

Return a single artifact by parsing path in the form project/name or entity/project/name.

Args
name (str) An artifact name. May be prefixed with project/ or entity/project/. If no entity is specified in the name, the Run or API setting's entity is used. Valid names can be in the following forms: name:version or name:alias.
type (str, optional) The type of artifact to fetch.
Returns
An Artifact object.
Raises
ValueError If the artifact name is not specified.
ValueError If the artifact type is specified but does not match the type of the fetched artifact.

Note:

This method is intended for external use only. Do not call api.artifact() within the wandb repository code.

artifact_collection

View source

artifact_collection(
    type_name: str,
    name: str
) -> "public.ArtifactCollection"

Return a single artifact collection by type and parsing path in the form entity/project/name.

Args
type_name (str) The type of artifact collection to fetch.
name (str) An artifact collection name. May be prefixed with entity/project.
Returns
An ArtifactCollection object.

artifact_collection_exists

View source

artifact_collection_exists(
    name: str,
    type: str
) -> bool

Return whether an artifact collection exists within a specified project and entity.

Args
name (str) An artifact collection name. May be prefixed with entity/project. If entity or project is not specified, it will be inferred from the override params if populated. Otherwise, entity will be pulled from the user settings and project will default to “uncategorized”.
type (str) The type of artifact collection
Returns
True if the artifact collection exists, False otherwise.

artifact_collections

View source

artifact_collections(
    project_name: str,
    type_name: str,
    per_page: int = 50
) -> "public.ArtifactCollections"

Return a collection of matching artifact collections.

Args
project_name (str) The name of the project to filter on.
type_name (str) The name of the artifact type to filter on.
per_page (int) Sets the page size for query pagination. Usually there is no reason to change this.
Returns
An iterable ArtifactCollections object.

artifact_exists

View source

artifact_exists(
    name: str,
    type: Optional[str] = None
) -> bool

Return whether an artifact version exists within a specified project and entity.

Args
name (str) An artifact name. May be prefixed with entity/project. If entity or project is not specified, it will be inferred from the override params if populated. Otherwise, entity will be pulled from the user settings and project will default to "uncategorized". Valid names can be in the following forms: name:version or name:alias.
type (str, optional) The type of artifact
Returns
True if the artifact version exists, False otherwise.

artifact_type

View source

artifact_type(
    type_name: str,
    project: Optional[str] = None
) -> "public.ArtifactType"

Return the matching ArtifactType.

Args
type_name (str) The name of the artifact type to retrieve.
project (str, optional) If given, a project name or path to filter on.
Returns
An ArtifactType object.

artifact_types

View source

artifact_types(
    project: Optional[str] = None
) -> "public.ArtifactTypes"

Return a collection of matching artifact types.

Args
project (str, optional) If given, a project name or path to filter on.
Returns
An iterable ArtifactTypes object.

artifact_versions

View source

artifact_versions(
    type_name, name, per_page=50
)

Deprecated, use artifacts(type_name, name) instead.

artifacts

View source

artifacts(
    type_name: str,
    name: str,
    per_page: int = 50,
    tags: Optional[List[str]] = None
) -> "public.Artifacts"

Return an Artifacts collection from the given parameters.

Args
type_name (str) The type of artifacts to fetch.
name (str) An artifact collection name. May be prefixed with entity/project.
per_page (int) Sets the page size for query pagination. Usually there is no reason to change this.
tags (list[str], optional) Only return artifacts with all of these tags.
Returns
An iterable Artifacts object.

automation

View source

automation(
    name: str,
    *,
    entity: Optional[str] = None
) -> "Automation"

Returns the only Automation matching the parameters.

Args
name The name of the automation to fetch.
entity The entity to fetch the automation for.
Raises
ValueError If zero or multiple Automations match the search criteria.

Examples:

Get an existing automation named “my-automation”:

import wandb

api = wandb.Api()
automation = api.automation(name="my-automation")

Get an existing automation named “other-automation”, from the entity “my-team”:

automation = api.automation(name="other-automation", entity="my-team")

automations

View source

automations(
    entity: Optional[str] = None,
    *,
    name: Optional[str] = None,
    per_page: int = 50
) -> Iterator['Automation']

Returns an iterator over all Automations that match the given parameters.

If no parameters are provided, the returned iterator will contain all Automations that the user has access to.

Args
entity The entity to fetch the automations for.
name The name of the automation to fetch.
per_page The number of automations to fetch per page. Defaults to 50. Usually there is no reason to change this.
Returns
An iterator over the matching automations.

Examples:

Fetch all existing automations for the entity “my-team”:

import wandb

api = wandb.Api()
automations = api.automations(entity="my-team")

create_automation

View source

create_automation(
    obj: "NewAutomation",
    *,
    fetch_existing: bool = (False),
    **kwargs
) -> "Automation"

Create a new Automation.

Args
obj The automation to create.
fetch_existing If True, and a conflicting automation already exists, attempt to fetch the existing automation instead of raising an error.
**kwargs Any additional values to assign to the automation before creating it. If given, these will override any values that may already be set on the automation:
- name: The name of the automation.
- description: The description of the automation.
- enabled: Whether the automation is enabled.
- scope: The scope of the automation.
- event: The event that triggers the automation.
- action: The action that is triggered by the automation.
Returns
The saved Automation.

Examples:

Create a new automation named “my-automation” that sends a Slack notification when a run within a specific project logs a metric exceeding a custom threshold:

import wandb
from wandb.automations import OnRunMetric, RunEvent, SendNotification

api = wandb.Api()

project = api.project("my-project", entity="my-team")

# Use the first Slack integration for the team
slack_hook = next(api.slack_integrations(entity="my-team"))

event = OnRunMetric(
    scope=project,
    filter=RunEvent.metric("custom-metric") > 10,
)
action = SendNotification.from_integration(slack_hook)

automation = api.create_automation(
    event >> action,
    name="my-automation",
    description="Send a Slack message whenever 'custom-metric' exceeds 10.",
)

create_project

View source

create_project(
    name: str,
    entity: str
) -> None

Create a new project.

Args
name (str) The name of the new project.
entity (str) The entity of the new project.

create_registry

View source

create_registry(
    name: str,
    visibility: Literal['organization', 'restricted'],
    organization: Optional[str] = None,
    description: Optional[str] = None,
    artifact_types: Optional[List[str]] = None
) -> Registry

Create a new registry.

Args
name The name of the registry. Name must be unique within the organization.
visibility The visibility of the registry. organization: Anyone in the organization can view this registry. You can edit their roles later from the settings in the UI. restricted: Only invited members via the UI can access this registry. Public sharing is disabled.
organization The organization of the registry. If no organization is set in the settings, the organization will be fetched from the entity if the entity only belongs to one organization.
description The description of the registry.
artifact_types The accepted artifact types of the registry. Each type must be no more than 128 characters and must not include the characters / or :. If not specified, all types are accepted. Types added to the registry cannot be removed later.
Returns
A registry object.

Examples:

import wandb

api = wandb.Api()
registry = api.create_registry(
    name="my-registry",
    visibility="restricted",
    organization="my-org",
    description="This is a test registry",
    artifact_types=["model"],
)

create_run

View source

create_run(
    *,
    run_id: Optional[str] = None,
    project: Optional[str] = None,
    entity: Optional[str] = None
) -> "public.Run"

Create a new run.

Args
run_id (str, optional) The ID to assign to the run, if given. The run ID is automatically generated by default, so in general, you do not need to specify this and should only do so at your own risk.
project (str, optional) If given, the project of the new run.
entity (str, optional) If given, the entity of the new run.
Returns
The newly created Run.

create_run_queue

View source

create_run_queue(
    name: str,
    type: "public.RunQueueResourceType",
    entity: Optional[str] = None,
    prioritization_mode: Optional['public.RunQueuePrioritizationMode'] = None,
    config: Optional[dict] = None,
    template_variables: Optional[dict] = None
) -> "public.RunQueue"

Create a new run queue (launch).

Args
name (str) Name of the queue to create
type (str) Type of resource to be used for the queue. One of “local-container”, “local-process”, “kubernetes”, “sagemaker”, or “gcp-vertex”.
entity (str) Optional name of the entity to create the queue. If None, will use the configured or default entity.
prioritization_mode (str) Optional version of prioritization to use. Either “V0” or None
config (dict) Optional default resource configuration to be used for the queue. Use handlebars (eg. {{var}}) to specify template variables.
template_variables (dict) A dictionary of template variable schemas to be used with the config. Expected format of: { "var-name": { "schema": { "type": ("string", "number", or "integer"), "default": (optional value), "minimum": (optional minimum), "maximum": (optional maximum), "enum": [..."(options)"] } } }
Returns
The newly created RunQueue
Raises
ValueError if any of the parameters are invalid. wandb.Error on wandb API errors.
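The template_variables schema pairs with handlebars placeholders in config. A minimal sketch, in which the queue name, the "cpu" variable, and the Kubernetes resource keys are all illustrative rather than required names:

```python
# Sketch of a Launch queue config with one template variable.
# The "cpu" variable and the resource keys below are illustrative.
config = {
    "resource_args": {
        "kubernetes": {"requests": {"cpu": "{{cpu}}"}}  # handlebars placeholder
    }
}
template_variables = {
    "cpu": {
        "schema": {
            "type": "integer",
            "default": 2,
            "minimum": 1,
            "maximum": 8,
        }
    }
}

# With a configured client, the queue could then be created with:
# import wandb
# api = wandb.Api()
# queue = api.create_run_queue(
#     name="my-k8s-queue",
#     type="kubernetes",
#     entity="my-team",
#     config=config,
#     template_variables=template_variables,
# )
```

Every `{{var}}` placeholder in the config must have a matching entry in template_variables.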

create_team

View source

create_team(
    team, admin_username=None
)

Create a new team.

Args
team (str) The name of the team
admin_username (str) optional username of the admin user of the team, defaults to the current user.
Returns
A Team object

create_user

View source

create_user(
    email, admin=(False)
)

Create a new user.

Args
email (str) The email address of the user
admin (bool) Whether this user should be a global instance admin
Returns
A User object

delete_automation

View source

delete_automation(
    obj: Union['Automation', str]
) -> Literal[True]

Delete an automation.

Args
obj The automation to delete, or its ID.
Returns
True if the automation was deleted successfully.

flush

View source

flush()

Flush the local cache.

The api object keeps a local cache of runs, so if the state of the run may change while executing your script you must clear the local cache with api.flush() to get the latest values associated with the run.

from_path

View source

from_path(
    path
)

Return a run, sweep, project or report from a path.

Examples:

project = api.from_path("my_project")
team_project = api.from_path("my_team/my_project")
run = api.from_path("my_team/my_project/runs/id")
sweep = api.from_path("my_team/my_project/sweeps/id")
report = api.from_path("my_team/my_project/reports/My-Report-Vm11dsdf")
Args
path (str) The path to the project, run, sweep or report
Returns
A Project, Run, Sweep, or BetaReport instance.
Raises
wandb.Error if path is invalid or the object doesn’t exist

integrations

View source

integrations(
    entity: Optional[str] = None,
    *,
    per_page: int = 50
) -> Iterator['Integration']

Return an iterator of all integrations for an entity.

Args
entity The entity (e.g. team name) for which to fetch integrations. If not provided, the user’s default entity will be used.
per_page Number of integrations to fetch per page. Defaults to 50. Usually there is no reason to change this.
Yields
Iterator[SlackIntegration | WebhookIntegration]: An iterator of any supported integrations.

job

View source

job(
    name: Optional[str],
    path: Optional[str] = None
) -> "public.Job"

Return a Job from the given parameters.

Args
name (str) The job name.
path (str, optional) If given, the root path in which to download the job artifact.
Returns
A Job object.

list_jobs

View source

list_jobs(
    entity: str,
    project: str
) -> List[Dict[str, Any]]

Return a list of jobs, if any, for the given entity and project.

Args
entity (str) The entity for the listed job(s).
project (str) The project for the listed job(s).
Returns
A list of matching jobs.

project

View source

project(
    name: str,
    entity: Optional[str] = None
) -> "public.Project"

Return the Project with the given name (and entity, if given).

Args
name (str) The project name.
entity (str) Name of the entity requested. If None, will fall back to the default entity passed to Api. If no default entity, will raise a ValueError.
Returns
A Project object.

projects

View source

projects(
    entity: Optional[str] = None,
    per_page: int = 200
) -> "public.Projects"

Get projects for a given entity.

Args
entity (str) Name of the entity requested. If None, will fall back to the default entity passed to Api. If no default entity, will raise a ValueError.
per_page (int) Sets the page size for query pagination. Usually there is no reason to change this.
Returns
A Projects object which is an iterable collection of Project objects.

queued_run

View source

queued_run(
    entity, project, queue_name, run_queue_item_id, project_queue=None,
    priority=None
)

Return a single queued run based on the path.

Parses paths of the form entity/project/queue_id/run_queue_item_id.

registries

View source

registries(
    organization: Optional[str] = None,
    filter: Optional[Dict[str, Any]] = None
) -> Registries

Returns a Registry iterator.

Use the iterator to search and filter registries, collections, or artifact versions across your organization’s registry.

Examples:

Find all registries with the names that contain “model”

import wandb

api = wandb.Api()  # specify an org if your entity belongs to multiple orgs
api.registries(filter={"name": {"$regex": "model"}})

Find all collections in the registries with the name “my_collection” and the tag “my_tag”

api.registries().collections(filter={"name": "my_collection", "tag": "my_tag"})

Find all artifact versions in the registries with a collection name that contains “my_collection” and a version that has the alias “best”

api.registries().collections(
    filter={"name": {"$regex": "my_collection"}}
).versions(filter={"alias": "best"})

Find all artifact versions in the registries that contain “model” and have the tag “prod” or alias “best”

api.registries(filter={"name": {"$regex": "model"}}).versions(
    filter={"$or": [{"tag": "prod"}, {"alias": "best"}]}
)
Args
organization (str, optional) The organization of the registry to fetch. If not specified, use the organization specified in the user’s settings.
filter (dict, optional) MongoDB-style filter to apply to each object in the registry iterator. Fields available to filter registries are name, description, created_at, and updated_at. Fields available to filter collections are name, tag, description, created_at, and updated_at. Fields available to filter versions are tag, alias, created_at, updated_at, and metadata.
Returns
A registry iterator.

registry

View source

registry(
    name: str,
    organization: Optional[str] = None
) -> Registry

Return a registry given a registry name.

Args
name The name of the registry. This is without the wandb-registry- prefix.
organization The organization of the registry. If no organization is set in the settings, the organization will be fetched from the entity if the entity only belongs to one organization.
Returns
A registry object.

Examples:

Fetch and update a registry

import wandb

api = wandb.Api()
registry = api.registry(name="my-registry", organization="my-org")
registry.description = "This is an updated description"
registry.save()

reports

View source

reports(
    path: str = "",
    name: Optional[str] = None,
    per_page: int = 50
) -> "public.Reports"

Get reports for a given project path.

WARNING: This API is in beta and will likely change in a future release.

Args
path (str) path to project the report resides in, should be in the form: “entity/project”
name (str, optional) optional name of the report requested.
per_page (int) Sets the page size for query pagination. Usually there is no reason to change this.
Returns
A Reports object which is an iterable collection of BetaReport objects.

run

View source

run(
    path=""
)

Return a single run by parsing path in the form entity/project/run_id.

Args
path (str) path to run in the form entity/project/run_id. If api.entity is set, this can be in the form project/run_id and if api.project is set this can just be the run_id.
Returns
A Run object.
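The fallback behavior described above can be sketched as plain string handling. This mirrors only the documented rules and is not the library's internal code:

```python
def resolve_run_path(path, default_entity=None, default_project=None):
    """Expand a short run path to (entity, project, run_id).

    Illustrative sketch of the documented fallbacks for Api.run:
    a two-part path falls back to api.entity, and a bare run_id
    additionally falls back to api.project.
    """
    parts = path.split("/")
    if len(parts) == 3:
        return tuple(parts)
    if len(parts) == 2 and default_entity:
        return (default_entity, parts[0], parts[1])
    if len(parts) == 1 and default_entity and default_project:
        return (default_entity, default_project, parts[0])
    raise ValueError(f"cannot resolve run path: {path!r}")


print(resolve_run_path("my-project/a1b2cdef", default_entity="my-team"))
# → ('my-team', 'my-project', 'a1b2cdef')
```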

run_queue

View source

run_queue(
    entity, name
)

Return the named RunQueue for entity.

To create a new RunQueue, use wandb.Api().create_run_queue(...).

runs

View source

runs(
    path: Optional[str] = None,
    filters: Optional[Dict[str, Any]] = None,
    order: str = "+created_at",
    per_page: int = 50,
    include_sweeps: bool = (True)
)

Return a set of runs from a project that match the filters provided.

Fields you can filter by include:

  • createdAt: The timestamp when the run was created. (in ISO 8601 format, e.g. “2023-01-01T12:00:00Z”)
  • displayName: The human-readable display name of the run. (e.g. “eager-fox-1”)
  • duration: The total runtime of the run in seconds.
  • group: The group name used to organize related runs together.
  • host: The hostname where the run was executed.
  • jobType: The type of job or purpose of the run.
  • name: The unique identifier of the run. (e.g. “a1b2cdef”)
  • state: The current state of the run.
  • tags: The tags associated with the run.
  • username: The username of the user who initiated the run.

Additionally, you can filter by items in the run config or summary metrics. Such as config.experiment_name, summary_metrics.loss, etc.

For more complex filtering, you can use MongoDB query operators. For details, see: https://docs.mongodb.com/manual/reference/operator/query The following operations are supported:

  • $and
  • $or
  • $nor
  • $eq
  • $ne
  • $gt
  • $gte
  • $lt
  • $lte
  • $in
  • $nin
  • $exists
  • $regex

Examples:

Find runs in my_project where config.experiment_name has been set to “foo”

api.runs(
    path="my_entity/my_project",
    filters={"config.experiment_name": "foo"},
)

Find runs in my_project where config.experiment_name has been set to “foo” or “bar”

api.runs(
    path="my_entity/my_project",
    filters={
        "$or": [
            {"config.experiment_name": "foo"},
            {"config.experiment_name": "bar"},
        ]
    },
)

Find runs in my_project where config.experiment_name matches a regex (anchors are not supported)

api.runs(
    path="my_entity/my_project",
    filters={"config.experiment_name": {"$regex": "b.*"}},
)

Find runs in my_project where the run name matches a regex (anchors are not supported)

api.runs(
    path="my_entity/my_project",
    filters={"display_name": {"$regex": "^foo.*"}},
)

Find runs in my_project where config.experiment contains a nested field “category” with value “testing”

api.runs(
    path="my_entity/my_project",
    filters={"config.experiment.category": "testing"},
)

Find runs in my_project with a loss value of 0.5 nested in a dictionary under model1 in the summary metrics

api.runs(
    path="my_entity/my_project",
    filters={"summary_metrics.model1.loss": 0.5},
)

Find runs in my_project sorted by ascending loss

api.runs(path="my_entity/my_project", order="+summary_metrics.loss")
Args
path (str) path to project, should be in the form: “entity/project”
filters (dict) queries for specific runs using the MongoDB query language. You can filter by run properties such as config.key, summary_metrics.key, state, entity, createdAt, etc. For example: {"config.experiment_name": "foo"} would find runs with a config entry of experiment name set to “foo”
order (str) Order can be created_at, heartbeat_at, config.*.value, or summary_metrics.*. Prepend + for ascending order and - for descending order. The default is +created_at, which sorts runs from oldest to newest.
per_page (int) Sets the page size for query pagination.
include_sweeps (bool) Whether to include the sweep runs in the results.
Returns
A Runs object, which is an iterable collection of Run objects.
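Because filters are ordinary dictionaries, they can be composed in code before the query is issued. A small sketch in which all key names and values are illustrative:

```python
# Compose a MongoDB-style filter for finished runs tagged "baseline"
# whose summary loss is below 0.5. Key names and values are illustrative.
base = {"state": "finished", "tags": {"$in": ["baseline"]}}
metric = {"summary_metrics.loss": {"$lt": 0.5}}
filters = {"$and": [base, metric]}

# The composed dict is passed unchanged to api.runs:
# import wandb
# api = wandb.Api()
# runs = api.runs(path="my-entity/my-project", filters=filters)
```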

slack_integrations

View source

slack_integrations(
    *,
    entity: Optional[str] = None,
    per_page: int = 50
) -> Iterator['SlackIntegration']

Returns an iterator of Slack integrations for an entity.

Args
entity The entity (e.g. team name) for which to fetch integrations. If not provided, the user’s default entity will be used.
per_page Number of integrations to fetch per page. Defaults to 50. Usually there is no reason to change this.
Yields
Iterator[SlackIntegration]: An iterator of Slack integrations.

Examples:

Get all registered Slack integrations for the team “my-team”:

import wandb

api = wandb.Api()
slack_integrations = api.slack_integrations(entity="my-team")

Find only Slack integrations that post to channel names starting with “team-alerts-”:

slack_integrations = api.slack_integrations(entity="my-team")
team_alert_integrations = [
    ig
    for ig in slack_integrations
    if ig.channel_name.startswith("team-alerts-")
]

sweep

View source

sweep(
    path=""
)

Return a sweep by parsing path in the form entity/project/sweep_id.

Args
path (str, optional) path to sweep in the form entity/project/sweep_id. If api.entity is set, this can be in the form project/sweep_id and if api.project is set this can just be the sweep_id.
Returns
A Sweep object.

sync_tensorboard

View source

sync_tensorboard(
    root_dir, run_id=None, project=None, entity=None
)

Sync a local directory containing tfevent files to wandb.

team

View source

team(
    team: str
) -> "public.Team"

Return the matching Team with the given name.

Args
team (str) The name of the team.
Returns
A Team object.

update_automation

View source

update_automation(
    obj: "Automation",
    *,
    create_missing: bool = (False),
    **kwargs
) -> "Automation"

Update an existing automation.

Args
obj The automation to update. Must be an existing automation.
create_missing (bool) If True and the automation does not exist, create it.
**kwargs Any additional values to assign to the automation before updating it. If given, these will override any values that may already be set on the automation:

  • name: The name of the automation.
  • description: The description of the automation.
  • enabled: Whether the automation is enabled.
  • scope: The scope of the automation.
  • event: The event that triggers the automation.
  • action: The action that is triggered by the automation.
Returns
The updated automation.

Examples:

Disable and edit the description of an existing automation (“my-automation”):

import wandb

api = wandb.Api()

automation = api.automation(name="my-automation")
automation.enabled = False
automation.description = "Kept for reference, but no longer used."

updated_automation = api.update_automation(automation)
Or, equivalently, pass the updated values as keyword arguments:

import wandb

api = wandb.Api()

automation = api.automation(name="my-automation")

updated_automation = api.update_automation(
    automation,
    enabled=False,
    description="Kept for reference, but no longer used.",
)

upsert_run_queue

View source

upsert_run_queue(
    name: str,
    resource_config: dict,
    resource_type: "public.RunQueueResourceType",
    entity: Optional[str] = None,
    template_variables: Optional[dict] = None,
    external_links: Optional[dict] = None,
    prioritization_mode: Optional['public.RunQueuePrioritizationMode'] = None
)

Upsert a run queue (launch).

Args
name (str) Name of the queue to create
entity (str) Optional name of the entity to create the queue. If None, will use the configured or default entity.
resource_config (dict) Optional default resource configuration to be used for the queue. Use handlebars (eg. {{var}}) to specify template variables.
resource_type (str) Type of resource to be used for the queue. One of “local-container”, “local-process”, “kubernetes”, “sagemaker”, or “gcp-vertex”.
template_variables (dict) A dictionary of template variable schemas to be used with the config. Expected format of: { "var-name": { "schema": { "type": ("string", "number", or "integer"), "default": (optional value), "minimum": (optional minimum), "maximum": (optional maximum), "enum": [..."(options)"] } } }
external_links (dict) Optional dictionary of external links to be used with the queue. Expected format of: { "name": "url" }
prioritization_mode (str) Optional version of prioritization to use. Either “V0” or None
Returns
The upserted RunQueue.
Raises
ValueError if any of the parameters are invalid. wandb.Error on wandb API errors.

user

View source

user(
    username_or_email: str
) -> Optional['public.User']

Return a user from a username or email address.

Note: This function only works for Local Admins. If you are trying to get your own user object, use api.viewer.

Args
username_or_email (str) The username or email address of the user
Returns
A User object or None if a user couldn’t be found

users

View source

users(
    username_or_email: str
) -> List['public.User']

Return all users from a partial username or email address query.

Note: This function only works for Local Admins. If you are trying to get your own user object, use api.viewer.

Args
username_or_email (str) The prefix or suffix of the user you want to find
Returns
An array of User objects

webhook_integrations

View source

webhook_integrations(
    entity: Optional[str] = None,
    *,
    per_page: int = 50
) -> Iterator['WebhookIntegration']

Returns an iterator of webhook integrations for an entity.

Args
entity The entity (e.g. team name) for which to fetch integrations. If not provided, the user’s default entity will be used.
per_page Number of integrations to fetch per page. Defaults to 50. Usually there is no reason to change this.
Yields
Iterator[WebhookIntegration]: An iterator of webhook integrations.

Examples:

Get all registered webhook integrations for the team “my-team”:

import wandb

api = wandb.Api()
webhook_integrations = api.webhook_integrations(entity="my-team")

Find only webhook integrations that post requests to “https://my-fake-url.com”:

webhook_integrations = api.webhook_integrations(entity="my-team")
my_webhooks = [
    ig
    for ig in webhook_integrations
    if ig.url_endpoint.startswith("https://my-fake-url.com")
]
Class Variables
CREATE_PROJECT
DEFAULT_ENTITY_QUERY
USERS_QUERY
VIEWER_QUERY

4.8.2 - File

File is a class associated with a file saved by wandb.

File(
    client, attrs, run=None
)
Attributes
path_uri Returns the uri path to the file in the storage bucket.

Methods

delete

View source

delete()

display

View source

display(
    height=420, hidden=(False)
) -> bool

Display this object in jupyter.

download

View source

download(
    root: str = ".",
    replace: bool = (False),
    exist_ok: bool = (False),
    api: Optional[Api] = None
) -> io.TextIOWrapper

Downloads a file previously saved by a run from the wandb server.

Args
root (str): Local directory to save the file to. Defaults to ".". replace (bool): If True, download overwrites a local file if it exists. Defaults to False. exist_ok (bool): If True, does not raise a ValueError if the file already exists, and does not re-download unless replace=True. Defaults to False. api (Api, optional): If given, the Api instance used to download the file.
Raises
ValueError if the file already exists and both replace=False and exist_ok=False.
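The interaction of replace and exist_ok can be summarized as a small decision table. This is a sketch of the documented rules only, not the library's implementation:

```python
def should_download(local_file_exists, replace=False, exist_ok=False):
    """Sketch of File.download's documented overwrite rules."""
    if not local_file_exists:
        return True  # nothing to clobber; always download
    if replace:
        return True  # overwrite the existing file
    if exist_ok:
        return False  # keep the existing file, skip the download
    raise ValueError("file already exists")


print(should_download(local_file_exists=True, exist_ok=True))
# → False
```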

snake_to_camel

View source

snake_to_camel(
    string
)

to_html

View source

to_html(
    *args, **kwargs
)

4.8.3 - Files

An iterable collection of File objects.

Files(
    client, run, names=None, per_page=50, upload=(False)
)
Attributes
cursor The start cursor to use for the next fetched page.
more Whether there are more pages to be fetched.

Methods

convert_objects

View source

convert_objects()

Convert the last fetched response data into the iterated objects.

next

View source

next() -> T

Return the next item from the iterator. When exhausted, raise StopIteration

update_variables

View source

update_variables()

Update the query variables for the next page fetch.

__getitem__

View source

__getitem__(
    index: (int | slice)
) -> (T | list[T])

__iter__

View source

__iter__() -> Iterator[T]

__len__

View source

__len__() -> int
Class Variables
QUERY

4.8.4 - Job

Job(
    api: "Api",
    name,
    path: Optional[str] = None
) -> None
Attributes

Methods

call

View source

call(
    config, project=None, entity=None, queue=None, resource="local-container",
    resource_args=None, template_variables=None, project_queue=None, priority=None
)

set_entrypoint

View source

set_entrypoint(
    entrypoint: List[str]
)

4.8.5 - Project

A project is a namespace for runs.

Project(
    client, entity, project, attrs
)
Attributes

Methods

artifacts_types

View source

artifacts_types(
    per_page=50
)

display

View source

display(
    height=420, hidden=(False)
) -> bool

Display this object in jupyter.

snake_to_camel

View source

snake_to_camel(
    string
)

sweeps

View source

sweeps()

to_html

View source

to_html(
    height=420, hidden=(False)
)

Generate HTML containing an iframe displaying this project.

4.8.6 - Projects

An iterable collection of Project objects.

Projects(
    client, entity, per_page=50
)
Attributes
cursor The start cursor to use for the next fetched page.
more Whether there are more pages to be fetched.

Methods

convert_objects

View source

convert_objects()

Convert the last fetched response data into the iterated objects.

next

View source

next() -> T

Return the next item from the iterator. When exhausted, raise StopIteration

update_variables

View source

update_variables() -> None

Update the query variables for the next page fetch.

__getitem__

View source

__getitem__(
    index: (int | slice)
) -> (T | list[T])

__iter__

View source

__iter__() -> Iterator[T]
Class Variables
QUERY

4.8.7 - QueuedRun

A single queued run associated with an entity and project. Call run = queued_run.wait_until_running() or run = queued_run.wait_until_finished() to access the run.

QueuedRun(
    client, entity, project, queue_name, run_queue_item_id,
    project_queue=LAUNCH_DEFAULT_PROJECT, priority=None
)
Attributes

Methods

delete

View source

delete(
    delete_artifacts=(False)
)

Delete the given queued run from the wandb backend.

wait_until_finished

View source

wait_until_finished()

wait_until_running

View source

wait_until_running()

4.8.8 - Registry

A single registry in W&B Registry.

Registry(
    client: "Client",
    organization: str,
    entity: str,
    name: str,
    attrs: Optional[Dict[str, Any]] = None
)
Attributes
allow_all_artifact_types Returns whether all artifact types are allowed in the registry. If True then artifacts of any type can be added to this registry. If False then artifacts are restricted to the types in artifact_types for this registry.
artifact_types Returns the artifact types allowed in the registry. If allow_all_artifact_types is True then artifact_types reflects the types previously saved or currently used in the registry. If allow_all_artifact_types is False then artifacts are restricted to the types in artifact_types.
created_at Timestamp of when the registry was created.
description Description of the registry.
entity Organization entity of the registry.
full_name Full name of the registry including the wandb-registry- prefix.
name Name of the registry without the wandb-registry- prefix.
organization Organization name of the registry.
updated_at Timestamp of when the registry was last updated.
visibility Visibility of the registry.

Methods

collections

View source

collections(
    filter: Optional[Dict[str, Any]] = None
) -> Collections

Returns the collections belonging to the registry.

create

View source

@classmethod
create(
    client: "Client",
    organization: str,
    name: str,
    visibility: Literal['organization', 'restricted'],
    description: Optional[str] = None,
    artifact_types: Optional[List[str]] = None
)

Create a new registry.

The registry name must be unique within the organization. Rather than calling this classmethod directly, use api.create_registry().

Args
client The GraphQL client.
organization The name of the organization.
name The name of the registry (without the wandb-registry- prefix).
visibility The visibility level (‘organization’ or ‘restricted’).
description An optional description for the registry.
artifact_types An optional list of allowed artifact types.
Returns
Registry The newly created Registry object.
Raises
ValueError If a registry with the same name already exists in the organization or if the creation fails.

delete

View source

delete() -> None

Delete the registry. This is irreversible.

load

View source

load() -> None

Load the registry attributes from the backend to reflect the latest saved state.

save

View source

save() -> None

Save registry attributes to the backend.

versions

View source

versions(
    filter: Optional[Dict[str, Any]] = None
) -> Versions

Returns the versions belonging to the registry.

4.8.9 - Run

A single run associated with an entity and project.

Run(
    client: "RetryingClient",
    entity: str,
    project: str,
    run_id: str,
    attrs: Optional[Mapping] = None,
    include_sweeps: bool = (True)
)
Attributes

Methods

create

View source

@classmethod
create(
    api, run_id=None, project=None, entity=None
)

Create a run for the given project.

delete

View source

delete(
    delete_artifacts=(False)
)

Delete the given run from the wandb backend.

display

View source

display(
    height=420, hidden=(False)
) -> bool

Display this object in jupyter.

file

View source

file(
    name
)

Return a file with the given name saved with this run.

Args
name (str): name of requested file.
Returns
A File matching the name argument.

files

View source

files(
    names=None, per_page=50
)

Return the files saved with this run, optionally filtered by name.

Args
names (list): names of the requested files; if empty, returns all files. per_page (int): number of results per page.
Returns
A Files object, which is an iterator over File objects.

history

View source

history(
    samples=500, keys=None, x_axis="_step", pandas=(True), stream="default"
)

Return sampled history metrics for a run.

This is simpler and faster if you are ok with the history records being sampled.

Args
samples (int, optional) The number of samples to return
pandas (bool, optional) Return a pandas dataframe
keys (list, optional) Only return metrics for specific keys
x_axis (str, optional) Use this metric as the xAxis defaults to _step
stream (str, optional) “default” for metrics, “system” for machine metrics
Returns
pandas.DataFrame If pandas=True returns a pandas.DataFrame of history metrics. list of dicts: If pandas=False returns a list of dicts of history metrics.

load

View source

load(
    force=(False)
)

log_artifact

View source

log_artifact(
    artifact: "wandb.Artifact",
    aliases: Optional[Collection[str]] = None,
    tags: Optional[Collection[str]] = None
)

Declare an artifact as output of a run.

Args
artifact (Artifact): An artifact returned from wandb.Api().artifact(name). aliases (list, optional): Aliases to apply to this artifact.
tags (list, optional) Tags to apply to this artifact, if any.
Returns
An Artifact object.

logged_artifacts

View source

logged_artifacts(
    per_page: int = 100
) -> public.RunArtifacts

Fetches all artifacts logged by this run.

Retrieves all output artifacts that were logged during the run. Returns a paginated result that can be iterated over or collected into a single list.

Args
per_page Number of artifacts to fetch per API request.
Returns
An iterable collection of all Artifact objects logged as outputs during this run.

Example:

>>> import wandb
>>> import tempfile
>>> with tempfile.NamedTemporaryFile(
...     mode="w", delete=False, suffix=".txt"
... ) as tmp:
...     tmp.write("This is a test artifact")
...     tmp_path = tmp.name
>>> run = wandb.init(project="artifact-example")
>>> artifact = wandb.Artifact("test_artifact", type="dataset")
>>> artifact.add_file(tmp_path)
>>> run.log_artifact(artifact)
>>> run.finish()
>>> api = wandb.Api()
>>> finished_run = api.run(f"{run.entity}/{run.project}/{run.id}")
>>> for logged_artifact in finished_run.logged_artifacts():
...     print(logged_artifact.name)
test_artifact

save

View source

save()

scan_history

View source

scan_history(
    keys=None, page_size=1000, min_step=None, max_step=None
)

Returns an iterable collection of all history records for a run.

Example:

Export all the loss values for an example run

run = api.run("l2k2/examples-numpy-boston/i0wt6xua")
history = run.scan_history(keys=["Loss"])
losses = [row["Loss"] for row in history]
Args
keys ([str], optional): only fetch these keys, and only fetch rows that have all of keys defined. page_size (int, optional): size of pages to fetch from the api. min_step (int, optional): the minimum step to include in the scan. max_step (int, optional): the maximum step to include in the scan.
Returns
An iterable collection over history records (dict).
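Because each record is a plain dict, post-processing is ordinary Python. Below, rows stands in for the iterable returned by scan_history, and its values are made up:

```python
# rows stands in for run.scan_history(keys=["Loss"]); values are made up.
rows = [
    {"_step": 0, "Loss": 0.92},
    {"_step": 1, "Loss": 0.41},
    {"_step": 2, "Loss": 0.63},
]

# Find the step with the lowest loss.
best = min(rows, key=lambda r: r["Loss"])
print(best["_step"], best["Loss"])
# → 1 0.41
```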

snake_to_camel

View source

snake_to_camel(
    string
)

to_html

View source

to_html(
    height=420, hidden=(False)
)

Generate HTML containing an iframe displaying this run.

update

View source

update()

Persist changes to the run object to the wandb backend.

upload_file

View source

upload_file(
    path, root="."
)

Upload a file.

Args
path (str): name of the file to upload. root (str): the root path to save the file relative to. For example, if you want the file saved in the run as "my_dir/file.txt" and you are currently in "my_dir", set root to "../".
Returns
A File matching the name argument.

use_artifact

View source

use_artifact(
    artifact, use_as=None
)

Declare an artifact as an input to a run.

Args
artifact (Artifact): An artifact returned from wandb.Api().artifact(name) use_as (string, optional): A string identifying how the artifact is used in the script. Used to easily differentiate artifacts used in a run, when using the beta wandb launch feature’s artifact swapping functionality.
Returns
An Artifact object.

used_artifacts

View source

used_artifacts(
    per_page: int = 100
) -> public.RunArtifacts

Fetches artifacts explicitly used by this run.

Retrieves only the input artifacts that were explicitly declared as used during the run, typically via run.use_artifact(). Returns a paginated result that can be iterated over or collected into a single list.

Args
per_page Number of artifacts to fetch per API request.
Returns
An iterable collection of Artifact objects explicitly used as inputs in this run.

Example:

>>> import wandb
>>> run = wandb.init(project="artifact-example")
>>> run.use_artifact("test_artifact:latest")
>>> run.finish()
>>> api = wandb.Api()
>>> finished_run = api.run(f"{run.entity}/{run.project}/{run.id}")
>>> for used_artifact in finished_run.used_artifacts():
...     print(used_artifact.name)
test_artifact

wait_until_finished

View source

wait_until_finished()

4.8.10 - RunQueue

RunQueue(
    client: "RetryingClient",
    name: str,
    entity: str,
    prioritization_mode: Optional[RunQueuePrioritizationMode] = None,
    _access: Optional[RunQueueAccessType] = None,
    _default_resource_config_id: Optional[int] = None,
    _default_resource_config: Optional[dict] = None
) -> None
Attributes
items Up to the first 100 queued runs. Modifying this list will not modify the queue or any enqueued items!

Methods

create

View source

@classmethod
create(
    name: str,
    resource: "RunQueueResourceType",
    entity: Optional[str] = None,
    prioritization_mode: Optional['RunQueuePrioritizationMode'] = None,
    config: Optional[dict] = None,
    template_variables: Optional[dict] = None
) -> "RunQueue"

delete

View source

delete()

Delete the run queue from the wandb backend.
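As a sketch of how create and delete fit together, the snippet below builds a hypothetical resource configuration for a Docker-backed queue. The queue name, entity, and config keys are illustrative, not a complete schema, and the API calls themselves (which require authentication against a W&B backend) are shown commented out:

```python
# Hypothetical resource configuration for a Docker-backed queue; the keys
# here are illustrative, not a complete schema.
queue_config = {
    "resource_args": {
        "docker": {"env": ["MY_ENV_VAR"]},
    }
}

# Creating and deleting a queue (requires authentication):
# from wandb.apis.public import RunQueue
# queue = RunQueue.create(
#     name="my-docker-queue",       # hypothetical queue name
#     resource="local-container",
#     entity="my-team",             # hypothetical entity
#     config=queue_config,
# )
# queue.delete()
print(sorted(queue_config["resource_args"]))
```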

4.8.11 - Runs

An iterable collection of runs associated with a project and optional filter.

Runs(
    client: "RetryingClient",
    entity: str,
    project: str,
    filters: Optional[Dict[str, Any]] = None,
    order: Optional[str] = None,
    per_page: int = 50,
    include_sweeps: bool = (True)
)

This is generally used indirectly via the Api.runs method.

Attributes
cursor The start cursor to use for the next fetched page.
more Whether there are more pages to be fetched.

Methods

convert_objects

View source

convert_objects()

Convert the last fetched response data into the iterated objects.

histories

View source

histories(
    samples: int = 500,
    keys: Optional[List[str]] = None,
    x_axis: str = "_step",
    format: Literal['default', 'pandas', 'polars'] = "default",
    stream: Literal['default', 'system'] = "default"
)

Return sampled history metrics for all runs that fit the filters conditions.

Args
samples (int, optional) The number of samples to return per run
keys (list[str], optional) Only return metrics for specific keys
x_axis (str, optional) Use this metric as the x-axis. Defaults to _step
format (Literal, optional) Format to return data in, options are “default”, “pandas”, “polars”
stream (Literal, optional) “default” for metrics, “system” for machine metrics
Returns
pandas.DataFrame If format=“pandas”, returns a pandas.DataFrame of history metrics.
polars.DataFrame If format=“polars”, returns a polars.DataFrame of history metrics. list of dicts: If format=“default”, returns a list of dicts containing history metrics with a run_id key.
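To make the format="default" return shape concrete, the sketch below builds the same list-of-dicts structure from made-up sampled data: one dict per sampled row, each tagged with a run_id key. The commented lines show the corresponding API call, which requires a logged-in client:

```python
# Made-up sampled history data, keyed by run id.
sampled = {
    "run_a1b2": [{"_step": 0, "loss": 1.2}, {"_step": 50, "loss": 0.7}],
    "run_c3d4": [{"_step": 0, "loss": 1.5}],
}

# format="default" yields a flat list of dicts, each carrying a run_id key.
rows = [
    {"run_id": run_id, **row}
    for run_id, history in sampled.items()
    for row in history
]

# The real call (requires a logged-in API client):
# runs = wandb.Api().runs("entity/project", filters={"state": "finished"})
# rows = runs.histories(samples=500, keys=["loss"], format="default")
print(len(rows))
```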

next

View source

next() -> T

Return the next item from the iterator. Raises StopIteration when exhausted.

update_variables

View source

update_variables() -> None

Update the query variables for the next page fetch.

__getitem__

View source

__getitem__(
    index: (int | slice)
) -> (T | list[T])

__iter__

View source

__iter__() -> Iterator[T]

__len__

View source

__len__() -> int
Class Variables
QUERY None

4.8.12 - Sweep

A set of runs associated with a sweep.

Sweep(
    client, entity, project, sweep_id, attrs=None
)

Examples:

Instantiate with:

api = wandb.Api()
sweep = api.sweep("entity/project/sweep_id")
Attributes
runs (Runs) list of runs
id (str) sweep id
project (str) name of project
config (dict) dictionary of sweep configuration
state (str) the state of the sweep
expected_run_count (int) number of expected runs for the sweep

Methods

best_run

View source

best_run(
    order=None
)

Return the best run sorted by the metric defined in config or the order passed in.
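Conceptually, best_run picks the top run after sorting by the metric named in the sweep config (or by the order argument). A minimal sketch with made-up run summaries, assuming a sweep metric configured with goal: minimize:

```python
# Made-up run summaries; in practice these come from the sweep's runs.
summaries = [
    {"name": "run-1", "val_loss": 0.42},
    {"name": "run-2", "val_loss": 0.31},
    {"name": "run-3", "val_loss": 0.55},
]

# For a sweep config with metric: {name: val_loss, goal: minimize},
# the best run is the one minimizing that metric.
best = min(summaries, key=lambda s: s["val_loss"])

# The real call (requires a logged-in API client):
# sweep = wandb.Api().sweep("entity/project/sweep_id")
# best_run = sweep.best_run()
print(best["name"])
```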

display

View source

display(
    height=420, hidden=(False)
) -> bool

Display this object in jupyter.

get

View source

@classmethod
get(
    client, entity=None, project=None, sid=None, order=None, query=None, **kwargs
)

Execute a query against the cloud backend.

load

View source

load(
    force: bool = (False)
)

snake_to_camel

View source

snake_to_camel(
    string
)

to_html

View source

to_html(
    height=420, hidden=(False)
)

Generate HTML containing an iframe displaying this sweep.

Class Variables
LEGACY_QUERY
QUERY

4.9 - init

Start a new run to track and log to W&B.

init(
    entity: (str | None) = None,
    project: (str | None) = None,
    dir: (StrPath | None) = None,
    id: (str | None) = None,
    name: (str | None) = None,
    notes: (str | None) = None,
    tags: (Sequence[str] | None) = None,
    config: (dict[str, Any] | str | None) = None,
    config_exclude_keys: (list[str] | None) = None,
    config_include_keys: (list[str] | None) = None,
    allow_val_change: (bool | None) = None,
    group: (str | None) = None,
    job_type: (str | None) = None,
    mode: (Literal['online', 'offline', 'disabled'] | None) = None,
    force: (bool | None) = None,
    anonymous: (Literal['never', 'allow', 'must'] | None) = None,
    reinit: (bool | Literal[None, 'default', 'return_previous', 'finish_previous',
        'create_new']) = None,
    resume: (bool | Literal['allow', 'never', 'must', 'auto'] | None) = None,
    resume_from: (str | None) = None,
    fork_from: (str | None) = None,
    save_code: (bool | None) = None,
    tensorboard: (bool | None) = None,
    sync_tensorboard: (bool | None) = None,
    monitor_gym: (bool | None) = None,
    settings: (Settings | dict[str, Any] | None) = None
) -> Run

In an ML training pipeline, you could add wandb.init() to the beginning of your training script as well as your evaluation script, and each piece would be tracked as a run in W&B.

wandb.init() spawns a new background process to log data to a run, and it also syncs data to https://wandb.ai by default, so you can see your results in real-time.

Call wandb.init() to start a run before logging data with wandb.log(). When you’re done logging data, call wandb.finish() to end the run. If you don’t call wandb.finish(), the run will end when your script exits.

For more on using wandb.init(), including detailed examples, check out our guide and FAQs.

Examples:

Explicitly set the entity and project and choose a name for the run:

import wandb

run = wandb.init(
    entity="geoff",
    project="capsules",
    name="experiment-2021-10-31",
)

# ... your training code here ...

run.finish()

Add metadata about the run using the config argument:

import wandb

config = {"lr": 0.01, "batch_size": 32}
with wandb.init(config=config) as run:
    run.config.update({"architecture": "resnet", "depth": 34})

    # ... your training code here ...

Note that you can use wandb.init() as a context manager to automatically call wandb.finish() at the end of the block.

Args
entity The username or team name under which the runs will be logged. The entity must already exist, so ensure you've created your account or team in the UI before starting to log runs. If not specified, the run defaults to your default entity. To change the default entity, go to your settings and update "Default location to create new projects" under "Default team".
project The name of the project under which this run will be logged. If not specified, we use a heuristic to infer the project name based on the system, such as checking the git root or the current program file. If we can’t infer the project name, the project will default to "uncategorized".
dir The absolute path to the directory where experiment logs and metadata files are stored. If not specified, this defaults to the ./wandb directory. Note that this does not affect the location where artifacts are stored when calling download().
id A unique identifier for this run, used for resuming. It must be unique within the project and cannot be reused once a run is deleted. The identifier must not contain any of the following special characters: / \ # ? % :. For a short descriptive name, use the name field, or for saving hyperparameters to compare across runs, use config.
name A short display name for this run, which appears in the UI to help you identify it. By default, we generate a random two-word name, making it easy to cross-reference runs between tables and charts. Keeping run names brief enhances readability in chart legends and tables. For saving hyperparameters, we recommend using the config field.
notes A detailed description of the run, similar to a commit message in Git. Use this argument to capture any context or details that may help you recall the purpose or setup of this run in the future.
tags A list of tags to label this run in the UI. Tags are helpful for organizing runs or adding temporary identifiers like "baseline" or "production". You can easily add or remove tags, or filter by tags, in the UI. If resuming a run, the tags provided here will replace any existing tags. To add tags to a resumed run without overwriting the current tags, use run.tags += ["new_tag"] after calling run = wandb.init().
config Sets wandb.config, a dictionary-like object for storing input parameters to your run, such as model hyperparameters or data preprocessing settings. The config appears in the UI in an overview page, allowing you to group, filter, and sort runs based on these parameters. Keys should not contain periods (.), and values should be smaller than 10 MB. If a dictionary, argparse.Namespace, or absl.flags.FLAGS is provided, the key-value pairs will be loaded directly into wandb.config. If a string is provided, it is interpreted as a path to a YAML file, from which configuration values will be loaded into wandb.config.
config_exclude_keys A list of specific keys to exclude from wandb.config.
config_include_keys A list of specific keys to include in wandb.config.
allow_val_change Controls whether config values can be modified after their initial set. By default, an exception is raised if a config value is overwritten. For tracking variables that change during training, such as a learning rate, consider using wandb.log() instead. By default, this is False in scripts and True in Notebook environments.
group Specify a group name to organize individual runs as part of a larger experiment. This is useful for cases like cross-validation or running multiple jobs that train and evaluate a model on different test sets. Grouping allows you to manage related runs collectively in the UI, making it easy to toggle and review results as a unified experiment. For more information, refer to our guide to grouping runs.
job_type Specify the type of run, especially helpful when organizing runs within a group as part of a larger experiment. For example, in a group, you might label runs with job types such as “train” and “eval”. Defining job types enables you to easily filter and group similar runs in the UI, facilitating direct comparisons.
mode Specifies how run data is managed, with the following options: - "online" (default): Enables live syncing with W&B when a network connection is available, with real-time updates to visualizations. - "offline": Suitable for air-gapped or offline environments; data is saved locally and can be synced later. Ensure the run folder is preserved to enable future syncing. - "disabled": Disables all W&B functionality, making the run’s methods no-ops. Typically used in testing to bypass W&B operations.
force Determines if a W&B login is required to run the script. If True, the user must be logged in to W&B; otherwise, the script will not proceed. If False (default), the script can proceed without a login, switching to offline mode if the user is not logged in.
anonymous Specifies the level of control over anonymous data logging. Available options are: - "never" (default): Requires you to link your W&B account before tracking the run. This prevents unintentional creation of anonymous runs by ensuring each run is associated with an account. - "allow": Enables a logged-in user to track runs with their account, but also allows someone running the script without a W&B account to view the charts and data in the UI. - "must": Forces the run to be logged to an anonymous account, even if the user is logged in.
reinit Shorthand for the “reinit” setting. Determines the behavior of wandb.init() when a run is active.
resume Controls the behavior when resuming a run with the specified id. Available options are: - "allow": If a run with the specified id exists, it will resume from the last step; otherwise, a new run will be created. - "never": If a run with the specified id exists, an error will be raised. If no such run is found, a new run will be created. - "must": If a run with the specified id exists, it will resume from the last step. If no run is found, an error will be raised. - "auto": Automatically resumes the previous run if it crashed on this machine; otherwise, starts a new run. - True: Deprecated. Use "auto" instead. - False: Deprecated. Use the default behavior (leaving resume unset) to always start a new run. Note: If resume is set, fork_from and resume_from cannot be used. When resume is unset, the system will always start a new run. For more details, see our guide to resuming runs.
resume_from Specifies a moment in a previous run to resume a run from, using the format {run_id}?_step={step}. This allows users to truncate the history logged to a run at an intermediate step and resume logging from that step. The target run must be in the same project. If an id argument is also provided, the resume_from argument will take precedence. resume, resume_from and fork_from cannot be used together, only one of them can be used at a time. Note: This feature is in beta and may change in the future.
fork_from Specifies a point in a previous run from which to fork a new run, using the format {id}?_step={step}. This creates a new run that resumes logging from the specified step in the target run’s history. The target run must be part of the current project. If an id argument is also provided, it must be different from the fork_from argument, an error will be raised if they are the same. resume, resume_from and fork_from cannot be used together, only one of them can be used at a time. Note: This feature is in beta and may change in the future.
save_code Enables saving the main script or notebook to W&B, aiding in experiment reproducibility and allowing code comparisons across runs in the UI. By default, this is disabled, but you can change the default to enable on your settings page.
tensorboard Deprecated. Use sync_tensorboard instead.
sync_tensorboard Enables automatic syncing of W&B logs from TensorBoard or TensorBoardX, saving relevant event files for viewing in the W&B UI. (Default: False)
monitor_gym Enables automatic logging of videos of the environment when using OpenAI Gym. For additional details, see our guide for gym integration.
settings Specifies a dictionary or wandb.Settings object with advanced settings for the run.
Returns
A Run object, which is a handle to the current run. Use this object to perform operations like logging data, saving files, and finishing the run. See the Run API for more details.
Raises
Error If some unknown or internal error happened during the run initialization.
AuthenticationError If the user failed to provide valid credentials.
CommError If there was a problem communicating with the W&B server.
UsageError If the user provided invalid arguments to the function.
KeyboardInterrupt If the user interrupts the run initialization process.

4.10 - Integrations

Modules

keras module: Tools for integrating wandb with Keras.

4.10.1 - keras

Tools for integrating wandb with Keras.

Classes

class WandbCallback: WandbCallback automatically integrates keras with wandb.

class WandbEvalCallback: Abstract base class to build Keras callbacks for model prediction visualization.

class WandbMetricsLogger: Logger that sends system metrics to W&B.

class WandbModelCheckpoint: A checkpoint that periodically saves a Keras model or model weights.

4.10.1.1 - WandbCallback

WandbCallback automatically integrates keras with wandb.

WandbCallback(
    monitor="val_loss", verbose=0, mode="auto", save_weights_only=(False),
    log_weights=(False), log_gradients=(False), save_model=(True),
    training_data=None, validation_data=None, labels=None, predictions=36,
    generator=None, input_type=None, output_type=None, log_evaluation=(False),
    validation_steps=None, class_colors=None, log_batch_frequency=None,
    log_best_prefix="best_", save_graph=(True), validation_indexes=None,
    validation_row_processor=None, prediction_row_processor=None,
    infer_missing_processors=(True), log_evaluation_frequency=0,
    compute_flops=(False), **kwargs
)

Example:

model.fit(
    X_train,
    y_train,
    validation_data=(X_test, y_test),
    callbacks=[WandbCallback()],
)

WandbCallback will automatically log history data from any metrics collected by keras: loss and anything passed into keras_model.compile().

WandbCallback will set summary metrics for the run associated with the “best” training step, where “best” is defined by the monitor and mode attributes. This defaults to the epoch with the minimum val_loss. WandbCallback will by default save the model associated with the best epoch.

WandbCallback can optionally log gradient and parameter histograms.

WandbCallback can optionally save training and validation data for wandb to visualize.

Args
monitor (str) name of metric to monitor. Defaults to val_loss.
mode (str) one of {auto, min, max}. min: save the model when monitor is minimized; max: save the model when monitor is maximized; auto: try to guess when to save the model (default).
save_model If True, save a model when monitor beats all previous epochs; if False, don't save models.
save_graph (boolean) if True save model graph to wandb (default to True).
save_weights_only (boolean) if True, then only the model’s weights will be saved (model.save_weights(filepath)), else the full model is saved (model.save(filepath)).
log_weights (boolean) if True save histograms of the model’s layer’s weights.
log_gradients (boolean) if True log histograms of the training gradients
training_data (tuple) Same format (X,y) as passed to model.fit. This is needed for calculating gradients - this is mandatory if log_gradients is True.
validation_data (tuple) Same format (X,y) as passed to model.fit. A set of data for wandb to visualize. If this is set, every epoch, wandb will make a small number of predictions and save the results for later visualization. In case you are working with image data, please also set input_type and output_type in order to log correctly.
generator (generator) a generator that returns validation data for wandb to visualize. This generator should return tuples (X,y). Either validation_data or generator should be set for wandb to visualize specific data examples. In case you are working with image data, please also set input_type and output_type in order to log correctly.
validation_steps (int) if validation_data is a generator, how many steps to run the generator for the full validation set.
labels (list) If you are visualizing your data with wandb, this list of labels will convert numeric output to understandable strings if you are building a multiclass classifier. If you are making a binary classifier you can pass in a list of two labels ["label for false", "label for true"]. If validation_data and generator are both unset, this won't do anything.
predictions (int) the number of predictions to make for visualization each epoch, max is 100.
input_type (string) type of the model input to help visualization. can be one of: (image, images, segmentation_mask, auto).
output_type (string) type of the model output to help visualization. can be one of: (image, images, segmentation_mask, label).
log_evaluation (boolean) if True, save a Table containing validation data and the model’s predictions at each epoch. See validation_indexes, validation_row_processor, and output_row_processor for additional details.
class_colors ([float, float, float]) if the input or output is a segmentation mask, an array containing an rgb tuple (range 0-1) for each class.
log_batch_frequency (integer) if None, callback will log every epoch. If set to integer, callback will log training metrics every log_batch_frequency batches.
log_best_prefix (string) if None, no extra summary metrics will be saved. If set to a string, the monitored metric and epoch will be prepended with this value and stored as summary metrics.
validation_indexes ([wandb.data_types._TableLinkMixin]) an ordered list of index keys to associate with each validation example. If log_evaluation is True and validation_indexes is provided, then a Table of validation data will not be created and instead each prediction will be associated with the row represented by the TableLinkMixin. The most common way to obtain such keys is to use Table.get_index(), which will return a list of row keys.
validation_row_processor (Callable) a function to apply to the validation data, commonly used to visualize the data. The function will receive an ndx (int) and a row (dict). If your model has a single input, then row["input"] will be the input data for the row. Else, it will be keyed based on the name of the input slot. If your fit function takes a single target, then row["target"] will be the target data for the row. Else, it will be keyed based on the name of the output slots. For example, if your input data is a single ndarray, but you wish to visualize the data as an Image, then you can provide lambda ndx, row: {"img": wandb.Image(row["input"])} as the processor. Ignored if log_evaluation is False or validation_indexes are present.
output_row_processor (Callable) same as validation_row_processor, but applied to the model’s output. row["output"] will contain the results of the model output.
infer_missing_processors (bool) Determines if validation_row_processor and output_row_processor should be inferred if missing. Defaults to True. If labels are provided, we will attempt to infer classification-type processors where appropriate.
log_evaluation_frequency (int) Determines the frequency which evaluation results will be logged. Default 0 (only at the end of training). Set to 1 to log every epoch, 2 to log every other epoch, and so on. Has no effect when log_evaluation is False.
compute_flops (bool) Compute the FLOPs of your Keras Sequential or Functional model in GigaFLOPs unit.

Methods

get_flops

View source

get_flops() -> float

Calculate FLOPS [GFLOPs] for a tf.keras.Model or tf.keras.Sequential model in inference mode.

It uses tf.compat.v1.profiler under the hood.

set_model

View source

set_model(
    model
)

set_params

View source

set_params(
    params
)

4.10.1.2 - WandbEvalCallback

Abstract base class to build Keras callbacks for model prediction visualization.

WandbEvalCallback(
    data_table_columns: List[str],
    pred_table_columns: List[str],
    *args,
    **kwargs
) -> None

You can build callbacks that visualize model predictions on_epoch_end and can be passed to model.fit() for classification, object detection, segmentation, and other tasks.

To use this, inherit from this base callback class and implement the add_ground_truth and add_model_prediction methods.

The base class will take care of the following:

  • Initialize data_table for logging the ground truth and pred_table for predictions.
  • The data uploaded to data_table is used as a reference for the pred_table. This is to reduce the memory footprint. The data_table_ref is a list that can be used to access the referenced data. Check out the example below to see how it’s done.
  • Log the tables to W&B as W&B Artifacts.
  • Each new pred_table is logged as a new version with aliases.

Example:

class WandbClfEvalCallback(WandbEvalCallback):
    def __init__(self, validation_data, data_table_columns, pred_table_columns):
        super().__init__(data_table_columns, pred_table_columns)

        self.x = validation_data[0]
        self.y = validation_data[1]

    def add_ground_truth(self):
        for idx, (image, label) in enumerate(zip(self.x, self.y)):
            self.data_table.add_data(idx, wandb.Image(image), label)

    def add_model_predictions(self, epoch):
        preds = self.model.predict(self.x, verbose=0)
        preds = tf.argmax(preds, axis=-1)

        data_table_ref = self.data_table_ref
        table_idxs = data_table_ref.get_index()

        for idx in table_idxs:
            pred = preds[idx]
            self.pred_table.add_data(
                epoch,
                data_table_ref.data[idx][0],
                data_table_ref.data[idx][1],
                data_table_ref.data[idx][2],
                pred,
            )


model.fit(
    x,
    y,
    epochs=2,
    validation_data=(x, y),
    callbacks=[
        WandbClfEvalCallback(
            validation_data=(x, y),
            data_table_columns=["idx", "image", "label"],
            pred_table_columns=["epoch", "idx", "image", "label", "pred"],
        )
    ],
)

To have more fine-grained control, you can override the on_train_begin and on_epoch_end methods. If you want to log the samples after every N batches, you can implement the on_train_batch_end method.

Methods

add_ground_truth

View source

@abc.abstractmethod
add_ground_truth(
    logs: Optional[Dict[str, float]] = None
) -> None

Add ground truth data to data_table.

Use this method to write the logic for adding validation/training data to data_table initialized using init_data_table method.

Example:

for idx, data in enumerate(dataloader):
    self.data_table.add_data(idx, data)

This method is called once on_train_begin or equivalent hook.

add_model_predictions

View source

@abc.abstractmethod
add_model_predictions(
    epoch: int,
    logs: Optional[Dict[str, float]] = None
) -> None

Add a prediction from a model to pred_table.

Use this method to write the logic for adding model predictions for validation/training data to pred_table initialized using the init_pred_table method.

Example:

# Assuming the dataloader is not shuffling the samples.
for idx, data in enumerate(dataloader):
    preds = model.predict(data)
    self.pred_table.add_data(
        self.data_table_ref.data[idx][0],
        self.data_table_ref.data[idx][1],
        preds,
    )

This method is called on_epoch_end or equivalent hook.

init_data_table

View source

init_data_table(
    column_names: List[str]
) -> None

Initialize the W&B Tables for validation data.

Call this method in the on_train_begin or equivalent hook. This is followed by adding data to the table row- or column-wise.

Args
column_names (list) Column names for W&B Tables.

init_pred_table

View source

init_pred_table(
    column_names: List[str]
) -> None

Initialize the W&B Tables for model evaluation.

Call this method in the on_epoch_end or equivalent hook. This is followed by adding data to the table row- or column-wise.

Args
column_names (list) Column names for W&B Tables.

log_data_table

View source

log_data_table(
    name: str = "val",
    type: str = "dataset",
    table_name: str = "val_data"
) -> None

Log the data_table as W&B artifact and call use_artifact on it.

This lets the evaluation table use references to already-uploaded data (images, text, scalars, etc.) without re-uploading.

Args
name (str) A human-readable name for this artifact, which is how you can identify this artifact in the UI or reference it in use_artifact calls. (default is ‘val’)
type (str) The type of the artifact, which is used to organize and differentiate artifacts. (default is ‘dataset’)
table_name (str) The name of the table as will be displayed in the UI. (default is ‘val_data’).

log_pred_table

View source

log_pred_table(
    type: str = "evaluation",
    table_name: str = "eval_data",
    aliases: Optional[List[str]] = None
) -> None

Log the W&B Tables for model evaluation.

The table will be logged multiple times, creating a new version each time. Use this to compare models at different intervals interactively.

Args
type (str) The type of the artifact, which is used to organize and differentiate artifacts. (default is ’evaluation’)
table_name (str) The name of the table as will be displayed in the UI. (default is ’eval_data')
aliases (List[str]) List of aliases for the prediction table.

set_model

set_model(
    model
)

set_params

set_params(
    params
)

4.10.1.3 - WandbMetricsLogger

Logger that sends system metrics to W&B.

WandbMetricsLogger(
    log_freq: Union[LogStrategy, int] = "epoch",
    initial_global_step: int = 0,
    *args,
    **kwargs
) -> None

WandbMetricsLogger automatically logs the logs dictionary that callback methods take as an argument to wandb.

This callback automatically logs the following to a W&B run page:

  • system (CPU/GPU/TPU) metrics,
  • train and validation metrics defined in model.compile,
  • learning rate (both for a fixed value or a learning rate scheduler)

Notes:

If you resume training by passing initial_epoch to model.fit and you are using a learning rate scheduler, make sure to pass initial_global_step to WandbMetricsLogger. The initial_global_step is step_size * initial_step, where step_size is the number of training steps per epoch. step_size can be calculated as the product of the cardinality of the training dataset and the batch size.

Args
log_freq (“epoch”, “batch”, or int) if “epoch”, logs metrics at the end of each epoch. If “batch”, logs metrics at the end of each batch. If an integer, logs metrics at the end of that many batches. Defaults to “epoch”.
initial_global_step (int) Use this argument to correctly log the learning rate when you resume training from some initial_epoch, and a learning rate scheduler is used. This can be computed as step_size * initial_step. Defaults to 0.
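A worked example of the initial_global_step arithmetic described in the note above, assuming step_size means the number of training steps (batches) per epoch; the dataset size, batch size, and resume epoch here are hypothetical:

```python
# Hypothetical training setup.
num_train_samples = 10_000
batch_size = 32
initial_epoch = 5  # the epoch passed to model.fit when resuming

# Steps per epoch: samples divided by batch size, rounded up to account
# for a partial final batch.
step_size = -(-num_train_samples // batch_size)  # ceiling division
initial_global_step = step_size * initial_epoch

# The callback would then be constructed as:
# callback = WandbMetricsLogger(log_freq="batch", initial_global_step=initial_global_step)
print(step_size, initial_global_step)
```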

Methods

set_model

set_model(
    model
)

set_params

set_params(
    params
)

4.10.1.4 - WandbModelCheckpoint

A checkpoint that periodically saves a Keras model or model weights.

WandbModelCheckpoint(
    filepath: StrPath,
    monitor: str = "val_loss",
    verbose: int = 0,
    save_best_only: bool = (False),
    save_weights_only: bool = (False),
    mode: Mode = "auto",
    save_freq: Union[SaveStrategy, int] = "epoch",
    initial_value_threshold: Optional[float] = None,
    **kwargs
) -> None

Saved weights are uploaded to W&B as a wandb.Artifact.

Since this callback is subclassed from tf.keras.callbacks.ModelCheckpoint, the checkpointing logic is taken care of by the parent callback. You can learn more here: https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint

This callback is to be used in conjunction with training using model.fit() to save a model or weights (in a checkpoint file) at some interval. The model checkpoints will be logged as W&B Artifacts. You can learn more here: https://docs.wandb.ai/guides/artifacts

This callback provides the following features:

  • Save the model that has achieved “best performance” based on “monitor”.
  • Save the model at the end of every epoch regardless of the performance.
  • Save the model at the end of epoch or after a fixed number of training batches.
  • Save only model weights, or save the whole model.
  • Save the model either in SavedModel format or in .h5 format.
Args
filepath (Union[str, os.PathLike]) path to save the model file. filepath can contain named formatting options, which will be filled by the value of epoch and keys in logs (passed in on_epoch_end). For example: if filepath is model-{epoch:02d}-{val_loss:.2f}, then the model checkpoints will be saved with the epoch number and the validation loss in the filename.
monitor (str) The metric name to monitor. Default to “val_loss”.
verbose (int) Verbosity mode, 0 or 1. Mode 0 is silent, and mode 1 displays messages when the callback takes an action.
save_best_only (bool) if save_best_only=True, it only saves when the model is considered the “best” and the latest best model according to the quantity monitored will not be overwritten. If filepath doesn’t contain formatting options like {epoch} then filepath will be overwritten by each new better model locally. The model logged as an artifact will still be associated with the correct monitor. Artifacts will be uploaded continuously and versioned separately as a new best model is found.
save_weights_only (bool) if True, then only the model’s weights will be saved.
mode (Mode) one of {‘auto’, ‘min’, ‘max’}. For val_acc, this should be max, for val_loss this should be min, etc.
save_freq (Union[SaveStrategy, int]) epoch or integer. When using 'epoch', the callback saves the model after each epoch. When using an integer, the callback saves the model at end of this many batches. Note that when monitoring validation metrics such as val_acc or val_loss, save_freq must be set to “epoch” as those metrics are only available at the end of an epoch.
initial_value_threshold (Optional[float]) Floating point initial “best” value of the metric to be monitored.
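The named formatting options in filepath behave like standard Python format fields, filled from the epoch number and the keys in logs. A minimal, standalone sketch of that substitution (plain Python, not the callback itself):

```python
# Sketch: how named formatting options in `filepath` are filled.
# `epoch` and keys from `logs` (e.g. val_loss) become format arguments.
filepath_template = "model-{epoch:02d}-{val_loss:.2f}"

def format_checkpoint_path(template: str, epoch: int, logs: dict) -> str:
    """Fill the template with the epoch number and logged metric values."""
    return template.format(epoch=epoch, **logs)

print(format_checkpoint_path(filepath_template, 3, {"val_loss": 0.12345}))
# model-03-0.12
```

If filepath contains a key that is absent from logs at save time, the `str.format` call raises a KeyError, so the template keys should match the metrics actually logged.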
Attributes

Methods

set_model

set_model(
    model
)

set_params

set_params(
    params
)

4.11 - launch-library

Classes

class LaunchAgent: Launch agent class that polls one or more run queues and launches runs for W&B Launch.

Functions

launch(...): Launch a W&B launch experiment.

launch_add(...): Enqueue a W&B launch experiment. With either a source uri, job or docker_image.

4.11.1 - launch

Launch a W&B launch experiment.

launch(
    api: Api,
    job: Optional[str] = None,
    entry_point: Optional[List[str]] = None,
    version: Optional[str] = None,
    name: Optional[str] = None,
    resource: Optional[str] = None,
    resource_args: Optional[Dict[str, Any]] = None,
    project: Optional[str] = None,
    entity: Optional[str] = None,
    docker_image: Optional[str] = None,
    config: Optional[Dict[str, Any]] = None,
    synchronous: Optional[bool] = (True),
    run_id: Optional[str] = None,
    repository: Optional[str] = None
) -> AbstractRun
Arguments
job string reference to a wandb.Job, e.g. wandb/test/my-job:latest
api An instance of a wandb Api from wandb.apis.internal.
entry_point Entry point to run within the project. Defaults to using the entry point used in the original run for wandb URIs, or main.py for git repository URIs.
version For Git-based projects, either a commit hash or a branch name.
name Name under which to launch the run.
resource Execution backend for the run.
resource_args Resource related arguments for launching runs onto a remote backend. Will be stored on the constructed launch config under resource_args.
project Target project to send launched run to
entity Target entity to send launched run to
config A dictionary containing the configuration for the run. May also contain resource specific arguments under the key “resource_args”.
synchronous Whether to block while waiting for a run to complete. Defaults to True. Note that if synchronous is False and backend is “local-container”, this method will return, but the current process will block when exiting until the local run completes. If the current process is interrupted, any asynchronous runs launched via this method will be terminated. If synchronous is True and the run fails, the current process will error out as well.
run_id ID for the run (to ultimately replace the :name: field)
repository string name of repository path for remote registry

Example:

from wandb.sdk.launch import launch

job = "wandb/jobs/Hello World:latest"
params = {"epochs": 5}
# Run W&B project and create a reproducible docker environment
# on a local host
api = wandb.apis.internal.Api()
launch(api, job=job, config=params)
Returns
an instance of wandb.launch.SubmittedRun exposing information (e.g. run ID) about the launched run.
Raises
wandb.exceptions.ExecutionError If a run launched in blocking mode is unsuccessful.

4.11.2 - launch_add

Enqueue a W&B launch experiment. With either a source uri, job or docker_image.

launch_add(
    uri: Optional[str] = None,
    job: Optional[str] = None,
    config: Optional[Dict[str, Any]] = None,
    template_variables: Optional[Dict[str, Union[float, int, str]]] = None,
    project: Optional[str] = None,
    entity: Optional[str] = None,
    queue_name: Optional[str] = None,
    resource: Optional[str] = None,
    entry_point: Optional[List[str]] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    docker_image: Optional[str] = None,
    project_queue: Optional[str] = None,
    resource_args: Optional[Dict[str, Any]] = None,
    run_id: Optional[str] = None,
    build: Optional[bool] = (False),
    repository: Optional[str] = None,
    sweep_id: Optional[str] = None,
    author: Optional[str] = None,
    priority: Optional[int] = None
) -> "public.QueuedRun"
Arguments
uri URI of experiment to run. A wandb run uri or a Git repository URI.
job string reference to a wandb.Job, e.g. wandb/test/my-job:latest
config A dictionary containing the configuration for the run. May also contain resource specific arguments under the key “resource_args”
template_variables A dictionary containing values of template variables for a run queue. Expected format of {"VAR_NAME": VAR_VALUE}
project Target project to send launched run to
entity Target entity to send launched run to
queue_name the name of the queue to enqueue the run to
priority the priority level of the job, where 1 is the highest priority
resource Execution backend for the run: W&B provides built-in support for “local-container” backend
entry_point Entry point to run within the project. Defaults to using the entry point used in the original run for wandb URIs, or main.py for git repository URIs.
name Name run under which to launch the run.
version For Git-based projects, either a commit hash or a branch name.
docker_image The name of the docker image to use for the run.
resource_args Resource related arguments for launching runs onto a remote backend. Will be stored on the constructed launch config under resource_args.
run_id optional string indicating the id of the launched run
build optional flag, defaults to False; requires queue_name to be set. If True, an image is created, a job artifact is created, and a reference to that job artifact is pushed to the queue.
repository optional string to control the name of the remote repository, used when pushing images to a registry
project_queue optional string to control the name of the project for the queue. Primarily used for back compatibility with project scoped queues

Example:

from wandb.sdk.launch import launch_add

project_uri = "https://github.com/wandb/examples"
params = {"alpha": 0.5, "l1_ratio": 0.01}
# Run W&B project and create a reproducible docker environment
# on a local host
api = wandb.apis.internal.Api()
launch_add(uri=project_uri, config=params)
Returns
an instance of wandb.api.public.QueuedRun which gives information about the queued run, or, if wait_until_started or wait_until_finished is called, access to the underlying Run information.
Raises
wandb.exceptions.LaunchError if unsuccessful

4.11.3 - LaunchAgent

Launch agent class that polls one or more run queues and launches runs for W&B Launch.

LaunchAgent(
    api: Api,
    config: Dict[str, Any]
)
Arguments
api Api object to use for making requests to the backend.
config Config dictionary for the agent.
Attributes
num_running_jobs Return the number of jobs not including schedulers.
num_running_schedulers Return just the number of schedulers.
thread_ids Returns a list of running thread IDs for the agent.

Methods

check_sweep_state

View source

check_sweep_state(
    launch_spec, api
)

Check the state of a sweep before launching a run for the sweep.

fail_run_queue_item

View source

fail_run_queue_item(
    run_queue_item_id, message, phase, files=None
)

finish_thread_id

View source

finish_thread_id(
    thread_id, exception=None
)

Removes the job from our list for now.

get_job_and_queue

View source

get_job_and_queue()

initialized

View source

@classmethod
initialized() -> bool

Return whether the agent is initialized.

loop

View source

loop()

Loop infinitely to poll for jobs and run them.

Raises
KeyboardInterrupt if the agent is requested to stop.
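The loop described above, polling queues, popping items, and running them as jobs until interrupted, can be sketched in plain Python. Everything here (the queue representation, the callback names, the max_iterations bound) is illustrative only, not the agent's actual implementation:

```python
def agent_loop(queues, pop_from_queue, run_job, max_iterations=None):
    """Conceptual sketch of an agent loop: poll each queue in turn and
    run any popped item as a job. Stops on KeyboardInterrupt, or after
    `max_iterations` passes (a bound added here purely for illustration;
    the real agent loops until asked to stop)."""
    launched = []
    iterations = 0
    try:
        while max_iterations is None or iterations < max_iterations:
            for queue in queues:
                item = pop_from_queue(queue)
                if item is not None:
                    run_job(item)
                    launched.append(item)
            iterations += 1
    except KeyboardInterrupt:
        pass  # the agent stops when requested
    return launched

# Toy usage with in-memory queues standing in for run queues:
queues = [["job-a"], ["job-b"]]
result = agent_loop(
    queues,
    pop_from_queue=lambda q: q.pop(0) if q else None,
    run_job=lambda item: None,  # no-op runner for the sketch
    max_iterations=1,
)
print(result)  # ['job-a', 'job-b']
```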

name

View source

@classmethod
name() -> str

Return the name of the agent.

pop_from_queue

View source

pop_from_queue(
    queue
)

Pops an item off the run queue to run as a job.

Arguments
queue Queue to pop from.
Returns
Item popped off the queue.
Raises
Exception if there is an error popping from the queue.

print_status

View source

print_status() -> None

Prints the current status of the agent.

run_job

View source

run_job(
    job, queue, file_saver
)

Set up project and run the job.

Arguments
job Job to run.

task_run_job

View source

task_run_job(
    launch_spec, job, default_config, api, job_tracker
)

update_status

View source

update_status(
    status
)

Update the status of the agent.

Arguments
status Status to update the agent to.

4.12 - log

Upload run data.

log(
    data: dict[str, Any],
    step: (int | None) = None,
    commit: (bool | None) = None,
    sync: (bool | None) = None
) -> None

Use log to log data from runs, such as scalars, images, video, histograms, plots, and tables.

See our guides to logging for live examples, code snippets, best practices, and more.

The most basic usage is run.log({"train-loss": 0.5, "accuracy": 0.9}). This will save the loss and accuracy to the run’s history and update the summary values for these metrics.

Visualize logged data in the workspace at wandb.ai, or locally on a self-hosted instance of the W&B app, or export data to visualize and explore locally, e.g. in Jupyter notebooks, with our API.

Logged values don’t have to be scalars. Logging any wandb object is supported. For example run.log({"example": wandb.Image("myimage.jpg")}) will log an example image which will be displayed nicely in the W&B UI. See the reference documentation for all of the different supported types or check out our guides to logging for examples, from 3D molecular structures and segmentation masks to PR curves and histograms. You can use wandb.Table to log structured data. See our guide to logging tables for details.

The W&B UI organizes metrics with a forward slash (/) in their name into sections named using the text before the final slash. For example, the following results in two sections named “train” and “validate”:

run.log(
    {
        "train/accuracy": 0.9,
        "train/loss": 30,
        "validate/accuracy": 0.8,
        "validate/loss": 20,
    }
)

Only one level of nesting is supported; run.log({"a/b/c": 1}) produces a section named “a/b”.
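This grouping rule is plain string handling: the section is the text before the final slash. A standalone sketch of the rule (not W&B code):

```python
from typing import Optional

def section_for(metric_name: str) -> Optional[str]:
    """Return the UI section for a metric name: the text before the
    final slash, or None when the name contains no slash."""
    if "/" not in metric_name:
        return None  # no slash: the metric is not grouped into a section
    section, _, _ = metric_name.rpartition("/")
    return section

print(section_for("train/accuracy"))  # train
print(section_for("a/b/c"))           # a/b
```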

run.log is not intended to be called more than a few times per second. For optimal performance, limit your logging to once every N iterations, or collect data over multiple iterations and log it in a single step.

The W&B step

With basic usage, each call to log creates a new “step”. The step must always increase, and it is not possible to log to a previous step.

Note that you can use any metric as the X axis in charts. In many cases, it is better to treat the W&B step like you’d treat a timestamp rather than a training step.

# Example: log an "epoch" metric for use as an X axis.
run.log({"epoch": 40, "train-loss": 0.5})

See also define_metric.

It is possible to use multiple log invocations to log to the same step with the step and commit parameters. The following are all equivalent:

# Normal usage:
run.log({"train-loss": 0.5, "accuracy": 0.8})
run.log({"train-loss": 0.4, "accuracy": 0.9})

# Implicit step without auto-incrementing:
run.log({"train-loss": 0.5}, commit=False)
run.log({"accuracy": 0.8})
run.log({"train-loss": 0.4}, commit=False)
run.log({"accuracy": 0.9})

# Explicit step:
run.log({"train-loss": 0.5}, step=current_step)
run.log({"accuracy": 0.8}, step=current_step)
current_step += 1
run.log({"train-loss": 0.4}, step=current_step)
run.log({"accuracy": 0.9}, step=current_step)
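The commit semantics can be modeled as a small buffer: commit=False merges data into the pending step, and a committing call finalizes it. A toy model for intuition only, not the SDK's internals:

```python
class StepBuffer:
    """Toy model of how `commit` groups multiple log calls into one step."""

    def __init__(self):
        self.pending = {}   # data accumulated for the current step
        self.history = []   # one dict per committed step

    def log(self, data, commit=True):
        self.pending.update(data)
        if commit:
            self.history.append(self.pending)
            self.pending = {}

buf = StepBuffer()
buf.log({"train-loss": 0.5}, commit=False)
buf.log({"accuracy": 0.8})  # commits step 0
buf.log({"train-loss": 0.4}, commit=False)
buf.log({"accuracy": 0.9})  # commits step 1
print(buf.history)
# [{'train-loss': 0.5, 'accuracy': 0.8}, {'train-loss': 0.4, 'accuracy': 0.9}]
```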
Args
data A dict with str keys and values that are serializable Python objects including: int, float and string; any of the wandb.data_types; lists, tuples and NumPy arrays of serializable Python objects; other dicts of this structure.
step The step number to log. If None, then an implicit auto-incrementing step is used. See the notes in the description.
commit If true, finalize and upload the step. If false, then accumulate data for the step. See the notes in the description. If step is None, then the default is commit=True; otherwise, the default is commit=False.
sync This argument is deprecated and does nothing.

Examples:

For additional and more detailed examples, see our guides to logging.

Basic usage

import wandb

run = wandb.init()
run.log({"accuracy": 0.9, "epoch": 5})

Incremental logging

import wandb

run = wandb.init()
run.log({"loss": 0.2}, commit=False)
# Somewhere else when I'm ready to report this step:
run.log({"accuracy": 0.8})

Histogram

import numpy as np
import wandb

# sample gradients at random from normal distribution
gradients = np.random.randn(100, 100)
run = wandb.init()
run.log({"gradients": wandb.Histogram(gradients)})

Image from numpy

import numpy as np
import wandb

run = wandb.init()
examples = []
for i in range(3):
    pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
    image = wandb.Image(pixels, caption=f"random field {i}")
    examples.append(image)
run.log({"examples": examples})

Image from PIL

import numpy as np
from PIL import Image as PILImage
import wandb

run = wandb.init()
examples = []
for i in range(3):
    pixels = np.random.randint(
        low=0,
        high=256,
        size=(100, 100, 3),
        dtype=np.uint8,
    )
    pil_image = PILImage.fromarray(pixels, mode="RGB")
    image = wandb.Image(pil_image, caption=f"random field {i}")
    examples.append(image)
run.log({"examples": examples})

Video from numpy

import numpy as np
import wandb

run = wandb.init()
# axes are (time, channel, height, width)
frames = np.random.randint(
    low=0,
    high=256,
    size=(10, 3, 100, 100),
    dtype=np.uint8,
)
run.log({"video": wandb.Video(frames, fps=4)})

Matplotlib Plot

from matplotlib import pyplot as plt
import numpy as np
import wandb

run = wandb.init()
fig, ax = plt.subplots()
x = np.linspace(0, 10)
y = x * x
ax.plot(x, y)  # plot y = x^2
run.log({"chart": fig})

PR Curve

import wandb

run = wandb.init()
run.log({"pr": wandb.plot.pr_curve(y_test, y_probas, labels)})

3D Object

import wandb

run = wandb.init()
run.log(
    {
        "generated_samples": [
            wandb.Object3D(open("sample.obj")),
            wandb.Object3D(open("sample.gltf")),
            wandb.Object3D(open("sample.glb")),
        ]
    }
)
Raises
wandb.Error if called before wandb.init
ValueError if invalid data is passed

4.13 - login

Set up W&B login credentials.

login(
    anonymous: Optional[Literal['must', 'allow', 'never']] = None,
    key: Optional[str] = None,
    relogin: Optional[bool] = None,
    host: Optional[str] = None,
    force: Optional[bool] = None,
    timeout: Optional[int] = None,
    verify: bool = (False),
    referrer: Optional[str] = None
) -> bool

By default, this will only store credentials locally without verifying them with the W&B server. To verify credentials, pass verify=True.

Args
anonymous (string, optional) Can be “must”, “allow”, or “never”. If set to “must”, always log a user in anonymously. If set to “allow”, only create an anonymous user if the user isn’t already logged in. If set to “never”, never log a user anonymously. Default set to “never”.
key (string, optional) The API key to use.
relogin (bool, optional) If true, will re-prompt for API key.
host (string, optional) The host to connect to.
force (bool, optional) If true, will force a relogin.
timeout (int, optional) Number of seconds to wait for user input.
verify (bool) Verify the credentials with the W&B server.
referrer (string, optional) The referrer to use in the URL login request.
Returns
True if the key is configured, False otherwise
Raises
AuthenticationError if api_key fails verification with the server
UsageError if api_key cannot be configured and there is no tty

4.14 - Run

A unit of computation logged by wandb. Typically, this is an ML experiment.

Run(
    settings: Settings,
    config: (dict[str, Any] | None) = None,
    sweep_config: (dict[str, Any] | None) = None,
    launch_config: (dict[str, Any] | None) = None
) -> None

Create a run with wandb.init():

import wandb

run = wandb.init()

There is only ever at most one active wandb.Run in any process, and it is accessible as wandb.run:

import wandb

assert wandb.run is None

wandb.init()

assert wandb.run is not None

Anything you log with wandb.log will be sent to that run.

If you want to start more runs in the same script or notebook, you’ll need to finish the run that is in-flight. Runs can be finished with wandb.finish or by using them in a with block:

import wandb

wandb.init()
wandb.finish()

assert wandb.run is None

with wandb.init() as run:
    pass  # log data here

assert wandb.run is None

See the documentation for wandb.init for more on creating runs, or check out our guide to wandb.init.

In distributed training, you can either create a single run in the rank 0 process and then log information only from that process, or you can create a run in each process, logging from each separately, and group the results together with the group argument to wandb.init. For more details on distributed training with W&B, check out our guide.

Currently, there is a parallel Run object in the wandb.Api. Eventually these two objects will be merged.

Attributes
summary (Summary) Single values set for each wandb.log() key. By default, summary is set to the last value logged. You can manually set summary to the best value, like max accuracy, instead of the final value.
config Config object associated with this run.
dir The directory where files associated with the run are saved.
entity The name of the W&B entity associated with the run. Entity can be a username or the name of a team or organization.
group Name of the group associated with the run. Setting a group helps the W&B UI organize runs in a sensible way. If you are doing a distributed training you should give all of the runs in the training the same group. If you are doing cross-validation you should give all the cross-validation folds the same group.
id Identifier for this run.
mode For compatibility with 0.9.x and earlier; will eventually be deprecated.
name Display name of the run. Display names are not guaranteed to be unique and may be descriptive. By default, they are randomly generated.
notes Notes associated with the run, if there are any. Notes can be a multiline string and can also use markdown and latex equations inside $$, like $x + 3$.
path Path to the run. Run paths include entity, project, and run ID, in the format entity/project/run_id.
project Name of the W&B project associated with the run.
project_url URL of the W&B project associated with the run, if there is one. Offline runs do not have a project URL.
resumed True if the run was resumed, False otherwise.
settings A frozen copy of run’s Settings object.
start_time Unix timestamp (in seconds) of when the run started.
starting_step The first step of the run.
step Current value of the step. This counter is incremented by wandb.log.
sweep_id Identifier for the sweep associated with the run, if there is one.
sweep_url URL of the sweep associated with the run, if there is one. Offline runs do not have a sweep URL.
tags Tags associated with the run, if there are any.
url The url for the W&B run, if there is one. Offline runs will not have a url.
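Because path always has the fixed shape entity/project/run_id, it splits cleanly into its components. A small illustrative helper (not part of the SDK):

```python
def parse_run_path(path: str) -> dict:
    """Split an 'entity/project/run_id' run path into its components."""
    entity, project, run_id = path.split("/")
    return {"entity": entity, "project": project, "run_id": run_id}

print(parse_run_path("my-team/my-project/abc123"))
# {'entity': 'my-team', 'project': 'my-project', 'run_id': 'abc123'}
```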

Methods

alert

View source

alert(
    title: str,
    text: str,
    level: (str | AlertLevel | None) = None,
    wait_duration: (int | float | timedelta | None) = None
) -> None

Launch an alert with the given title and text.

Args
title (str) The title of the alert, must be less than 64 characters long.
text (str) The text body of the alert.
level (str or AlertLevel, optional) The alert level to use, either: INFO, WARN, or ERROR.
wait_duration (int, float, or timedelta, optional) The time to wait (in seconds) before sending another alert with this title.

define_metric

View source

define_metric(
    name: str,
    step_metric: (str | wandb_metric.Metric | None) = None,
    step_sync: (bool | None) = None,
    hidden: (bool | None) = None,
    summary: (str | None) = None,
    goal: (str | None) = None,
    overwrite: (bool | None) = None
) -> wandb_metric.Metric

Customize metrics logged with wandb.log().

Args
name The name of the metric to customize.
step_metric The name of another metric to serve as the X-axis for this metric in automatically generated charts.
step_sync Automatically insert the last value of step_metric into run.log() if it is not provided explicitly. Defaults to True if step_metric is specified.
hidden Hide this metric from automatic plots.
summary Specify aggregate metrics added to summary. Supported aggregations include “min”, “max”, “mean”, “last”, “best”, “copy” and “none”. “best” is used together with the goal parameter. “none” prevents a summary from being generated. “copy” is deprecated and should not be used.
goal Specify how to interpret the “best” summary type. Supported options are “minimize” and “maximize”.
overwrite If false, then this call is merged with previous define_metric calls for the same metric by using their values for any unspecified parameters. If true, then unspecified parameters overwrite values specified by previous calls.
Returns
An object that represents this call but can otherwise be discarded.
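The summary aggregations listed above correspond to simple reductions over a metric’s history, with “best” interpreted through goal. A plain-Python sketch of that mapping (assumptions mine, not the SDK’s implementation):

```python
def summarize(history, mode, goal=None):
    """Reduce a list of logged values per the define_metric summary modes.
    Sketch only: the real SDK computes these server-side as values arrive."""
    if mode == "min":
        return min(history)
    if mode == "max":
        return max(history)
    if mode == "mean":
        return sum(history) / len(history)
    if mode == "last":
        return history[-1]
    if mode == "best":
        # "best" is interpreted through `goal`: minimize -> min, maximize -> max
        return min(history) if goal == "minimize" else max(history)
    raise ValueError(f"unsupported summary mode: {mode}")

losses = [0.9, 0.4, 0.6]
print(summarize(losses, "min"))                    # 0.4
print(summarize(losses, "best", goal="minimize"))  # 0.4
print(summarize(losses, "last"))                   # 0.6
```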

detach

View source

detach() -> None

display

View source

display(
    height: int = 420,
    hidden: bool = (False)
) -> bool

Display this run in jupyter.

finish

View source

finish(
    exit_code: (int | None) = None,
    quiet: (bool | None) = None
) -> None

Finish a run and upload any remaining data.

Marks the completion of a W&B run and ensures all data is synced to the server. The run’s final state is determined by its exit conditions and sync status.

Run States:

  • Running: Active run that is logging data and/or sending heartbeats.
  • Crashed: Run that stopped sending heartbeats unexpectedly.
  • Finished: Run completed successfully (exit_code=0) with all data synced.
  • Failed: Run completed with errors (exit_code!=0).
Args
exit_code Integer indicating the run’s exit status. Use 0 for success, any other value marks the run as failed.
quiet Deprecated. Configure logging verbosity using wandb.Settings(quiet=...).

finish_artifact

View source

finish_artifact(
    artifact_or_path: (Artifact | str),
    name: (str | None) = None,
    type: (str | None) = None,
    aliases: (list[str] | None) = None,
    distributed_id: (str | None) = None
) -> Artifact

Finishes a non-finalized artifact as output of a run.

Subsequent “upserts” with the same distributed ID will result in a new version.

Args
artifact_or_path (str or Artifact) A path to the contents of this artifact, can be in the following forms: - /local/directory - /local/directory/file.txt - s3://bucket/path You can also pass an Artifact object created by calling wandb.Artifact.
name (str, optional) An artifact name. May be prefixed with entity/project. Valid names can be in the following forms: - name:version - name:alias - digest This will default to the basename of the path prepended with the current run id if not specified.
type (str) The type of artifact to log, examples include dataset, model
aliases (list, optional) Aliases to apply to this artifact, defaults to ["latest"]
distributed_id (string, optional) Unique string that all distributed jobs share. If None, defaults to the run’s group name.
Returns
An Artifact object.

get_project_url

View source

get_project_url() -> (str | None)

URL of the W&B project associated with the run, if there is one.

Offline runs do not have a project URL.

Note: this method is deprecated and will be removed in a future release. Please use run.project_url instead.

get_sweep_url

View source

get_sweep_url() -> (str | None)

The URL of the sweep associated with the run, if there is one.

Offline runs do not have a sweep URL.

Note: this method is deprecated and will be removed in a future release. Please use run.sweep_url instead.

get_url

View source

get_url() -> (str | None)

URL of the W&B run, if there is one.

Offline runs do not have a URL.

Note: this method is deprecated and will be removed in a future release. Please use run.url instead.

join

View source

join(
    exit_code: (int | None) = None
) -> None

Deprecated alias for finish() - use finish instead.

link_artifact

View source

link_artifact(
    artifact: Artifact,
    target_path: str,
    aliases: (list[str] | None) = None
) -> (Artifact | None)

Link the given artifact to a portfolio (a promoted collection of artifacts).

The linked artifact will be visible in the UI for the specified portfolio.

Args
artifact the (public or local) artifact which will be linked
target_path str - takes the following forms: {portfolio}, {project}/{portfolio}, or {entity}/{project}/{portfolio}
aliases List[str] - optional alias(es) that will only be applied on this linked artifact inside the portfolio. The alias “latest” will always be applied to the latest version of an artifact that is linked.
Returns
The linked artifact if linking was successful, otherwise None.
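The three target_path forms resolve against a default entity and project. A hypothetical resolver showing the expansion (the default-filling behavior and error handling here are illustrative, not the SDK’s code):

```python
def resolve_target_path(target_path: str, default_entity: str, default_project: str):
    """Expand {portfolio}, {project}/{portfolio}, or
    {entity}/{project}/{portfolio} into (entity, project, portfolio)."""
    parts = target_path.split("/")
    if len(parts) == 1:
        return (default_entity, default_project, parts[0])
    if len(parts) == 2:
        return (default_entity, parts[0], parts[1])
    if len(parts) == 3:
        return tuple(parts)
    raise ValueError(f"invalid target path: {target_path}")

print(resolve_target_path("my-portfolio", "me", "proj"))
# ('me', 'proj', 'my-portfolio')
```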

link_model

View source

link_model(
    path: StrPath,
    registered_model_name: str,
    name: (str | None) = None,
    aliases: (list[str] | None) = None
) -> (Artifact | None)

Log a model artifact version and link it to a registered model in the model registry.

The linked model version will be visible in the UI for the specified registered model.

Steps:

  • Check if ’name’ model artifact has been logged. If so, use the artifact version that matches the files located at ‘path’ or log a new version. Otherwise log files under ‘path’ as a new model artifact, ’name’ of type ‘model’.
  • Check if registered model with name ‘registered_model_name’ exists in the ‘model-registry’ project. If not, create a new registered model with name ‘registered_model_name’.
  • Link version of model artifact ’name’ to registered model, ‘registered_model_name’.
  • Attach aliases from ‘aliases’ list to the newly linked model artifact version.
Args
path (str) A path to the contents of this model, can be in the following forms: - /local/directory - /local/directory/file.txt - s3://bucket/path
registered_model_name (str) - the name of the registered model that the model is to be linked to. A registered model is a collection of model versions linked to the model registry, typically representing a team’s specific ML Task. The entity that this registered model belongs to will be derived from the run
name (str, optional) - the name of the model artifact that files in ‘path’ will be logged to. This will default to the basename of the path prepended with the current run id if not specified.
aliases (List[str], optional) - alias(es) that will only be applied on this linked artifact inside the registered model. The alias “latest” will always be applied to the latest version of an artifact that is linked.

Examples:

run.link_model(
    path="/local/directory",
    registered_model_name="my_reg_model",
    name="my_model_artifact",
    aliases=["production"],
)

Invalid usage

run.link_model(
    path="/local/directory",
    registered_model_name="my_entity/my_project/my_reg_model",
    name="my_model_artifact",
    aliases=["production"],
)

run.link_model(
    path="/local/directory",
    registered_model_name="my_reg_model",
    name="my_entity/my_project/my_model_artifact",
    aliases=["production"],
)
Raises
AssertionError if registered_model_name is a path or if model artifact ’name’ is of a type that does not contain the substring ‘model’
ValueError if name has invalid special characters
Returns
The linked artifact if linking was successful, otherwise None.

log

View source

log(
    data: dict[str, Any],
    step: (int | None) = None,
    commit: (bool | None) = None,
    sync: (bool | None) = None
) -> None

Upload run data.

Use log to log data from runs, such as scalars, images, video, histograms, plots, and tables.

See our guides to logging for live examples, code snippets, best practices, and more.

The most basic usage is run.log({"train-loss": 0.5, "accuracy": 0.9}). This will save the loss and accuracy to the run’s history and update the summary values for these metrics.

Visualize logged data in the workspace at wandb.ai, or locally on a self-hosted instance of the W&B app, or export data to visualize and explore locally, e.g. in Jupyter notebooks, with our API.

Logged values don’t have to be scalars. Logging any wandb object is supported. For example run.log({"example": wandb.Image("myimage.jpg")}) will log an example image which will be displayed nicely in the W&B UI. See the reference documentation for all of the different supported types or check out our guides to logging for examples, from 3D molecular structures and segmentation masks to PR curves and histograms. You can use wandb.Table to log structured data. See our guide to logging tables for details.

The W&B UI organizes metrics with a forward slash (/) in their name into sections named using the text before the final slash. For example, the following results in two sections named “train” and “validate”:

run.log(
    {
        "train/accuracy": 0.9,
        "train/loss": 30,
        "validate/accuracy": 0.8,
        "validate/loss": 20,
    }
)

Only one level of nesting is supported; run.log({"a/b/c": 1}) produces a section named “a/b”.

run.log is not intended to be called more than a few times per second. For optimal performance, limit your logging to once every N iterations, or collect data over multiple iterations and log it in a single step.

The W&B step

With basic usage, each call to log creates a new “step”. The step must always increase, and it is not possible to log to a previous step.

Note that you can use any metric as the X axis in charts. In many cases, it is better to treat the W&B step like you’d treat a timestamp rather than a training step.

# Example: log an "epoch" metric for use as an X axis.
run.log({"epoch": 40, "train-loss": 0.5})

See also define_metric.

It is possible to use multiple log invocations to log to the same step with the step and commit parameters. The following are all equivalent:

# Normal usage:
run.log({"train-loss": 0.5, "accuracy": 0.8})
run.log({"train-loss": 0.4, "accuracy": 0.9})

# Implicit step without auto-incrementing:
run.log({"train-loss": 0.5}, commit=False)
run.log({"accuracy": 0.8})
run.log({"train-loss": 0.4}, commit=False)
run.log({"accuracy": 0.9})

# Explicit step:
run.log({"train-loss": 0.5}, step=current_step)
run.log({"accuracy": 0.8}, step=current_step)
current_step += 1
run.log({"train-loss": 0.4}, step=current_step)
run.log({"accuracy": 0.9}, step=current_step)
Args
data A dict with str keys and values that are serializable Python objects including: int, float and string; any of the wandb.data_types; lists, tuples and NumPy arrays of serializable Python objects; other dicts of this structure.
step The step number to log. If None, then an implicit auto-incrementing step is used. See the notes in the description.
commit If true, finalize and upload the step. If false, then accumulate data for the step. See the notes in the description. If step is None, then the default is commit=True; otherwise, the default is commit=False.
sync This argument is deprecated and does nothing.

Examples:

For additional and more detailed examples, see our guides to logging.

Basic usage

import wandb

run = wandb.init()
run.log({"accuracy": 0.9, "epoch": 5})

Incremental logging

import wandb

run = wandb.init()
run.log({"loss": 0.2}, commit=False)
# Somewhere else when I'm ready to report this step:
run.log({"accuracy": 0.8})

Histogram

import numpy as np
import wandb

# sample gradients at random from normal distribution
gradients = np.random.randn(100, 100)
run = wandb.init()
run.log({"gradients": wandb.Histogram(gradients)})

Image from numpy

import numpy as np
import wandb

run = wandb.init()
examples = []
for i in range(3):
    pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
    image = wandb.Image(pixels, caption=f"random field {i}")
    examples.append(image)
run.log({"examples": examples})

Image from PIL

import numpy as np
from PIL import Image as PILImage
import wandb

run = wandb.init()
examples = []
for i in range(3):
    pixels = np.random.randint(
        low=0,
        high=256,
        size=(100, 100, 3),
        dtype=np.uint8,
    )
    pil_image = PILImage.fromarray(pixels, mode="RGB")
    image = wandb.Image(pil_image, caption=f"random field {i}")
    examples.append(image)
run.log({"examples": examples})

Video from numpy

import numpy as np
import wandb

run = wandb.init()
# axes are (time, channel, height, width)
frames = np.random.randint(
    low=0,
    high=256,
    size=(10, 3, 100, 100),
    dtype=np.uint8,
)
run.log({"video": wandb.Video(frames, fps=4)})

Matplotlib Plot

from matplotlib import pyplot as plt
import numpy as np
import wandb

run = wandb.init()
fig, ax = plt.subplots()
x = np.linspace(0, 10)
y = x * x
ax.plot(x, y)  # plot y = x^2
run.log({"chart": fig})

PR Curve

import wandb

run = wandb.init()
run.log({"pr": wandb.plot.pr_curve(y_test, y_probas, labels)})

3D Object

import wandb

run = wandb.init()
run.log(
    {
        "generated_samples": [
            wandb.Object3D(open("sample.obj")),
            wandb.Object3D(open("sample.gltf")),
            wandb.Object3D(open("sample.glb")),
        ]
    }
)
Raises
wandb.Error if called before wandb.init
ValueError if invalid data is passed

log_artifact

View source

log_artifact(
    artifact_or_path: (Artifact | StrPath),
    name: (str | None) = None,
    type: (str | None) = None,
    aliases: (list[str] | None) = None,
    tags: (list[str] | None) = None
) -> Artifact

Declare an artifact as an output of a run.

Args
artifact_or_path (str or Artifact) A path to the contents of this artifact, can be in the following forms: - /local/directory - /local/directory/file.txt - s3://bucket/path You can also pass an Artifact object created by calling wandb.Artifact.
name (str, optional) An artifact name. Valid names can be in the following forms: - name:version - name:alias - digest This will default to the basename of the path prepended with the current run id if not specified.
type (str) The type of artifact to log, examples include dataset, model
aliases (list, optional) Aliases to apply to this artifact, defaults to ["latest"]
tags (list, optional) Tags to apply to this artifact, if any.
Returns
An Artifact object.

log_code

View source

log_code(
    root: (str | None) = ".",
    name: (str | None) = None,
    include_fn: (Callable[[str, str], bool] | Callable[[str], bool]) = _is_py_requirements_or_dockerfile,
    exclude_fn: (Callable[[str, str], bool] | Callable[[str], bool]) = filenames.exclude_wandb_fn
) -> (Artifact | None)

Save the current state of your code to a W&B Artifact.

By default, it walks the current directory and logs all files that end with .py.

Args
root The relative (to os.getcwd()) or absolute path to recursively find code from.
name (str, optional) The name of our code artifact. By default, we’ll name the artifact source-$PROJECT_ID-$ENTRYPOINT_RELPATH. There may be scenarios where you want many runs to share the same artifact. Specifying name allows you to achieve that.
include_fn A callable that accepts a file path and (optionally) root path and returns True when it should be included and False otherwise. This defaults to: lambda path, root: path.endswith(".py")
exclude_fn A callable that accepts a file path and (optionally) root path and returns True when it should be excluded and False otherwise. This defaults to a function that excludes all files within <root>/.wandb/ and <root>/wandb/ directories.

Examples:

Basic usage

run.log_code()

Advanced usage

run.log_code(
    "../",
    include_fn=lambda path: path.endswith(".py") or path.endswith(".ipynb"),
    exclude_fn=lambda path, root: os.path.relpath(path, root).startswith(
        "cache/"
    ),
)
Returns
An Artifact object if code was logged

log_model

View source

log_model(
    path: StrPath,
    name: (str | None) = None,
    aliases: (list[str] | None) = None
) -> None

Log a model artifact containing the contents of ‘path’ and mark it as an output of this run.

Args
path (str) A path to the contents of this model, can be in the following forms: - /local/directory - /local/directory/file.txt - s3://bucket/path
name (str, optional) A name to assign to the model artifact that the file contents will be added to. The name may contain only alphanumeric characters, dashes, underscores, and dots. This will default to the basename of the path prepended with the current run id if not specified.
aliases (list, optional) Aliases to apply to the created model artifact, defaults to ["latest"]

Examples:

run.log_model(
    path="/local/directory",
    name="my_model_artifact",
    aliases=["production"],
)

Invalid usage

run.log_model(
    path="/local/directory",
    name="my_entity/my_project/my_model_artifact",
    aliases=["production"],
)
Raises
ValueError if name has invalid special characters
Returns
None

mark_preempting

View source

mark_preempting() -> None

Mark this run as preempting.

Also tells the internal process to immediately report this to server.

project_name

View source

project_name() -> str

Name of the W&B project associated with the run.

Note: this method is deprecated and will be removed in a future release. Please use run.project instead.

restore

View source

restore(
    name: str,
    run_path: (str | None) = None,
    replace: bool = (False),
    root: (str | None) = None
) -> (None | TextIO)

Download the specified file from cloud storage.

The file is placed into the current directory or the run directory. By default, the file is only downloaded if it doesn’t already exist locally.

Args
name the name of the file
run_path optional path to a run to pull files from, for example username/project_name/run_id. Required if wandb.init has not been called.
replace whether to download the file even if it already exists locally
root the directory to download the file to. Defaults to the current directory or the run directory if wandb.init was called.
Returns
None if it can’t find the file, otherwise a file object open for reading
Raises
wandb.CommError if we can’t connect to the wandb backend
ValueError if the file is not found or can’t find run_path

save

View source

save(
    glob_str: (str | os.PathLike | None) = None,
    base_path: (str | os.PathLike | None) = None,
    policy: PolicyName = "live"
) -> (bool | list[str])

Sync one or more files to W&B.

Relative paths are relative to the current working directory.

A Unix glob, such as “myfiles/*”, is expanded at the time save is called regardless of the policy. In particular, new files are not picked up automatically.

A base_path may be provided to control the directory structure of uploaded files. It should be a prefix of glob_str, and the directory structure beneath it is preserved. It’s best understood through examples:

wandb.save("these/are/myfiles/*")
# => Saves files in a "these/are/myfiles/" folder in the run.

wandb.save("these/are/myfiles/*", base_path="these")
# => Saves files in an "are/myfiles/" folder in the run.

wandb.save("/User/username/Documents/run123/*.txt")
# => Saves files in a "run123/" folder in the run. See note below.

wandb.save("/User/username/Documents/run123/*.txt", base_path="/User")
# => Saves files in a "username/Documents/run123/" folder in the run.

wandb.save("files/*/saveme.txt")
# => Saves each "saveme.txt" file in an appropriate subdirectory
#    of "files/".

Note: when given an absolute path or glob and no base_path, one directory level is preserved as in the example above.

Args
glob_str A relative or absolute path or Unix glob.
base_path A path to use to infer a directory structure; see examples.
policy One of live, now, or end. * live: upload the file as it changes, overwriting the previous version * now: upload the file once now * end: upload file when the run ends
Returns
Paths to the symlinks created for the matched files. For historical reasons, this may return a boolean in legacy code.

status

View source

status() -> RunStatus

Get sync info from the internal backend about the current run’s sync status.

to_html

View source

to_html(
    height: int = 420,
    hidden: bool = (False)
) -> str

Generate HTML containing an iframe displaying the current run.

unwatch

View source

unwatch(
    models: (torch.nn.Module | Sequence[torch.nn.Module] | None) = None
) -> None

Remove pytorch model topology, gradient and parameter hooks.

Args
models (torch.nn.Module | Sequence[torch.nn.Module]): Optional list of PyTorch models that have had watch called on them.

upsert_artifact

View source

upsert_artifact(
    artifact_or_path: (Artifact | str),
    name: (str | None) = None,
    type: (str | None) = None,
    aliases: (list[str] | None) = None,
    distributed_id: (str | None) = None
) -> Artifact

Declare (or append to) a non-finalized artifact as output of a run.

Note that you must call run.finish_artifact() to finalize the artifact. This is useful when distributed jobs all need to contribute to the same artifact.

Args
artifact_or_path (str or Artifact) A path to the contents of this artifact, can be in the following forms: - /local/directory - /local/directory/file.txt - s3://bucket/path You can also pass an Artifact object created by calling wandb.Artifact.
name (str, optional) An artifact name. May be prefixed with entity/project. Valid names can be in the following forms: - name:version - name:alias - digest This will default to the basename of the path prepended with the current run id if not specified.
type (str) The type of artifact to log, examples include dataset, model
aliases (list, optional) Aliases to apply to this artifact, defaults to ["latest"]
distributed_id (string, optional) Unique string that all distributed jobs share. If None, defaults to the run’s group name.
Returns
An Artifact object.

use_artifact

View source

use_artifact(
    artifact_or_name: (str | Artifact),
    type: (str | None) = None,
    aliases: (list[str] | None) = None,
    use_as: (str | None) = None
) -> Artifact

Declare an artifact as an input to a run.

Call download or file on the returned object to get the contents locally.

Args
artifact_or_name (str or Artifact) An artifact name. May be prefixed with project/ or entity/project/. If no entity is specified in the name, the Run or API setting’s entity is used. Valid names can be in the following forms: - name:version - name:alias You can also pass an Artifact object created by calling wandb.Artifact
type (str, optional) The type of artifact to use.
aliases (list, optional) Aliases to apply to this artifact
use_as (string, optional) Optional string indicating what purpose the artifact was used with. Will be shown in UI.
Returns
An Artifact object.

use_model

View source

use_model(
    name: str
) -> FilePathStr

Download the files logged in a model artifact ‘name’.

Args
name (str) A model artifact name. ‘name’ must match the name of an existing logged model artifact. May be prefixed with entity/project/. Valid names can be in the following forms: - model_artifact_name:version - model_artifact_name:alias

Examples:

run.use_model(
    name="my_model_artifact:latest",
)

run.use_model(
    name="my_project/my_model_artifact:v0",
)

run.use_model(
    name="my_entity/my_project/my_model_artifact:<digest>",
)

Invalid usage

run.use_model(
    name="my_entity/my_project/my_model_artifact",
)
Raises
AssertionError if model artifact ‘name’ is of a type that does not contain the substring ‘model’.
Returns
path (str) path to downloaded model artifact file(s).

watch

View source

watch(
    models: (torch.nn.Module | Sequence[torch.nn.Module]),
    criterion: (torch.F | None) = None,
    log: (Literal['gradients', 'parameters', 'all'] | None) = "gradients",
    log_freq: int = 1000,
    idx: (int | None) = None,
    log_graph: bool = (False)
) -> None

Hooks into the given PyTorch model(s) to monitor gradients and the model’s computational graph.

This function can track parameters, gradients, or both during training. It should be extended to support arbitrary machine learning models in the future.

Args
models (torch.nn.Module | Sequence[torch.nn.Module]) A single model or a sequence of models to be monitored.
criterion (Optional[torch.F]) The loss function being optimized. Optional.
log (Optional[Literal[“gradients”, “parameters”, “all”]]) Specifies whether to log “gradients”, “parameters”, or “all”. Set to None to disable logging. Defaults to “gradients”.
log_freq (int) Frequency (in batches) to log gradients and parameters. Defaults to 1000.
idx (Optional[int]) Index used when tracking multiple models with wandb.watch. Defaults to None.
log_graph (bool) Whether to log the model’s computational graph. Defaults to False.
Raises
ValueError If wandb.init has not been called or if any of the models are not instances of torch.nn.Module.

__enter__

View source

__enter__() -> Run

__exit__

View source

__exit__(
    exc_type: type[BaseException],
    exc_val: BaseException,
    exc_tb: TracebackType
) -> bool

4.15 - save

Sync one or more files to W&B.

save(
    glob_str: (str | os.PathLike | None) = None,
    base_path: (str | os.PathLike | None) = None,
    policy: PolicyName = "live"
) -> (bool | list[str])

Relative paths are relative to the current working directory.

A Unix glob, such as “myfiles/*”, is expanded at the time save is called regardless of the policy. In particular, new files are not picked up automatically.

A base_path may be provided to control the directory structure of uploaded files. It should be a prefix of glob_str, and the directory structure beneath it is preserved. It’s best understood through examples:

wandb.save("these/are/myfiles/*")
# => Saves files in a "these/are/myfiles/" folder in the run.

wandb.save("these/are/myfiles/*", base_path="these")
# => Saves files in an "are/myfiles/" folder in the run.

wandb.save("/User/username/Documents/run123/*.txt")
# => Saves files in a "run123/" folder in the run. See note below.

wandb.save("/User/username/Documents/run123/*.txt", base_path="/User")
# => Saves files in a "username/Documents/run123/" folder in the run.

wandb.save("files/*/saveme.txt")
# => Saves each "saveme.txt" file in an appropriate subdirectory
#    of "files/".

Note: when given an absolute path or glob and no base_path, one directory level is preserved as in the example above.

Args
glob_str A relative or absolute path or Unix glob.
base_path A path to use to infer a directory structure; see examples.
policy One of live, now, or end. * live: upload the file as it changes, overwriting the previous version * now: upload the file once now * end: upload file when the run ends
Returns
Paths to the symlinks created for the matched files. For historical reasons, this may return a boolean in legacy code.

4.16 - sweep

Initialize a hyperparameter sweep.

sweep(
    sweep: Union[dict, Callable],
    entity: Optional[str] = None,
    project: Optional[str] = None,
    prior_runs: Optional[List[str]] = None
) -> str

Search for hyperparameters that optimize a cost function of a machine learning model by testing various combinations.

Make note of the unique identifier, sweep_id, that is returned. At a later step, provide the sweep_id to a sweep agent.

Args
sweep The configuration of a hyperparameter search. (or configuration generator). See Sweep configuration structure for information on how to define your sweep. If you provide a callable, ensure that the callable does not take arguments and that it returns a dictionary that conforms to the W&B sweep config spec.
entity The username or team name where you want to send W&B runs created by the sweep to. Ensure that the entity you specify already exists. If you don’t specify an entity, the run will be sent to your default entity, which is usually your username.
project The name of the project where W&B runs created from the sweep are sent to. If the project is not specified, the run is sent to a project labeled ‘Uncategorized’.
prior_runs The run IDs of existing runs to add to this sweep.
Returns
sweep_id str. A unique identifier for the sweep.

4.17 - wandb_workspaces

Classes

class reports: Python library for programmatically working with W&B Reports API.

class workspaces: Python library for programmatically working with W&B Workspace API.

4.17.1 - Reports

module wandb_workspaces.reports.v2

Python library for programmatically working with W&B Reports API.

import wandb_workspaces.reports.v2 as wr

report = wr.Report(
     entity="entity",
     project="project",
     title="An amazing title",
     description="A descriptive description.",
)

blocks = [
     wr.PanelGrid(
         panels=[
             wr.LinePlot(x="time", y="velocity"),
             wr.ScatterPlot(x="time", y="acceleration"),
         ]
     )
]

report.blocks = blocks
report.save()

class BarPlot

A panel object that shows a 2D bar plot.

Attributes:

  • title (Optional[str]): The text that appears at the top of the plot.
  • metrics (LList[MetricType]): The metrics to display in the bar plot.
  • orientation (Literal[“v”, “h”]): The orientation of the bar plot. Set to either vertical (“v”) or horizontal (“h”). Defaults to horizontal (“h”).
  • range_x (Tuple[float | None, float | None]): Tuple that specifies the range of the x-axis.
  • title_x (Optional[str]): The label of the x-axis.
  • title_y (Optional[str]): The label of the y-axis.
  • groupby (Optional[str]): Group runs based on a metric logged to your W&B project that the report pulls information from.
  • groupby_aggfunc (Optional[GroupAgg]): Aggregate runs with specified function. Options include mean, min, max, median, sum, samples, or None.
  • groupby_rangefunc (Optional[GroupArea]): Group runs based on a range. Options include minmax, stddev, stderr, none, samples, or None.
  • max_runs_to_show (Optional[int]): The maximum number of runs to show on the plot.
  • max_bars_to_show (Optional[int]): The maximum number of bars to show on the bar plot.
  • custom_expressions (Optional[LList[str]]): A list of custom expressions to be used in the bar plot.
  • legend_template (Optional[str]): The template for the legend.
  • font_size ( Optional[FontSize]): The size of the line plot’s font. Options include small, medium, large, auto, or None.
  • line_titles (Optional[dict]): The titles of the lines. The keys are the line names and the values are the titles.
  • line_colors (Optional[dict]): The colors of the lines. The keys are the line names and the values are the colors.

class BlockQuote

A block of quoted text.

Attributes:

  • text (str): The text of the block quote.

class CalloutBlock

A block of callout text.

Attributes:

  • text (str): The callout text.

class CheckedList

A list of items with checkboxes. Add one or more CheckedListItem within CheckedList.

Attributes:

  • items (LList[CheckedListItem]): A list of one or more CheckedListItem objects.

class CheckedListItem

A list item with a checkbox. Add one or more CheckedListItem within CheckedList.

Attributes:

  • text (str): The text of the list item.
  • checked (bool): Whether the checkbox is checked. By default, set to False.

class CodeBlock

A block of code.

Attributes:

  • code (str): The code in the block.
  • language (Optional[Language]): The language of the code. Language specified is used for syntax highlighting. By default, set to python. Options include javascript, python, css, json, html, markdown, yaml.

class CodeComparer

A panel object that compares the code between two different runs.

Attributes:

  • diff (Literal['split', 'unified']): How to display code differences. Options include split and unified.

class Config

Metrics logged to a run’s config object. Config objects are commonly logged using run.config[name] = ... or passing a config as a dictionary of key-value pairs, where the key is the name of the metric and the value is the value of that metric.

Attributes:

  • name (str): The name of the metric.

class CustomChart

A panel that shows a custom chart. The chart is defined by a weave query.

Attributes:

  • query (dict): The query that defines the custom chart. The key is the name of the field, and the value is the query.
  • chart_name (str): The title of the custom chart.
  • chart_fields (dict): Key-value pairs that define the axis of the plot. Where the key is the label, and the value is the metric.
  • chart_strings (dict): Key-value pairs that define the strings in the chart.

classmethod from_table

from_table(
    table_name: str,
    chart_fields: dict = None,
    chart_strings: dict = None
)

Create a custom chart from a table.

Arguments:

  • table_name (str): The name of the table.
  • chart_fields (dict): The fields to display in the chart.
  • chart_strings (dict): The strings to display in the chart.

class Gallery

A block that renders a gallery of reports and URLs.

Attributes:

  • items (List[Union[GalleryReport, GalleryURL]]): A list of GalleryReport and GalleryURL objects.

class GalleryReport

A reference to a report in the gallery.

Attributes:

  • report_id (str): The ID of the report.

class GalleryURL

A URL to an external resource.

Attributes:

  • url (str): The URL of the resource.
  • title (Optional[str]): The title of the resource.
  • description (Optional[str]): The description of the resource.
  • image_url (Optional[str]): The URL of an image to display.

class GradientPoint

A point in a gradient.

Attributes:

  • color: The color of the point.
  • offset: The position of the point in the gradient. The value should be between 0 and 100.

class H1

An H1 heading with the text specified.

Attributes:

  • text (str): The text of the heading.
  • collapsed_blocks (Optional[LList[“BlockTypes”]]): The blocks to show when the heading is collapsed.

class H2

An H2 heading with the text specified.

Attributes:

  • text (str): The text of the heading.
  • collapsed_blocks (Optional[LList[“BlockTypes”]]): One or more blocks to show when the heading is collapsed.

class H3

An H3 heading with the text specified.

Attributes:

  • text (str): The text of the heading.
  • collapsed_blocks (Optional[LList[“BlockTypes”]]): One or more blocks to show when the heading is collapsed.

class Heading


class HorizontalRule

HTML horizontal line.


class Image

A block that renders an image.

Attributes:

  • url (str): The URL of the image.
  • caption (str): The caption of the image. Caption appears underneath the image.

class InlineCode

Inline code. Does not add newline character after code.

Attributes:

  • text (str): The code you want to appear in the report.

class InlineLatex

Inline LaTeX markdown. Does not add newline character after the LaTeX markdown.

Attributes:

  • text (str): LaTeX markdown you want to appear in the report.

class LatexBlock

A block of LaTeX text.

Attributes:

  • text (str): The LaTeX text.

class Layout

The layout of a panel in a report. Adjusts the size and position of the panel.

Attributes:

  • x (int): The x position of the panel.
  • y (int): The y position of the panel.
  • w (int): The width of the panel.
  • h (int): The height of the panel.

class LinePlot

A panel object with 2D line plots.

Attributes:

  • title (Optional[str]): The text that appears at the top of the plot.
  • x (Optional[MetricType]): The name of a metric logged to your W&B project that the report pulls information from. The metric specified is used for the x-axis.
  • y (LList[MetricType]): One or more metrics logged to your W&B project that the report pulls information from. The metric specified is used for the y-axis.
  • range_x (Tuple[float | None, float | None]): Tuple that specifies the range of the x-axis.
  • range_y (Tuple[float | None, float | None]): Tuple that specifies the range of the y-axis.
  • log_x (Optional[bool]): Plots the x-coordinates using a base-10 logarithmic scale.
  • log_y (Optional[bool]): Plots the y-coordinates using a base-10 logarithmic scale.
  • title_x (Optional[str]): The label of the x-axis.
  • title_y (Optional[str]): The label of the y-axis.
  • ignore_outliers (Optional[bool]): If set to True, do not plot outliers.
  • groupby (Optional[str]): Group runs based on a metric logged to your W&B project that the report pulls information from.
  • groupby_aggfunc (Optional[GroupAgg]): Aggregate runs with specified function. Options include mean, min, max, median, sum, samples, or None.
  • groupby_rangefunc (Optional[GroupArea]): Group runs based on a range. Options include minmax, stddev, stderr, none, samples, or None.
  • smoothing_factor (Optional[float]): The smoothing factor to apply to the smoothing type. Accepted values range between 0 and 1.
  • smoothing_type Optional[SmoothingType]: Apply a filter based on the specified distribution. Options include exponentialTimeWeighted, exponential, gaussian, average, or none.
  • smoothing_show_original (Optional[bool]): If set to True, show the original data.
  • max_runs_to_show (Optional[int]): The maximum number of runs to show on the line plot.
  • custom_expressions (Optional[LList[str]]): Custom expressions to apply to the data.
  • plot_type Optional[LinePlotStyle]: The type of line plot to generate. Options include line, stacked-area, or pct-area.
  • font_size Optional[FontSize]: The size of the line plot’s font. Options include small, medium, large, auto, or None.
  • legend_position Optional[LegendPosition]: Where to place the legend. Options include north, south, east, west, or None.
  • legend_template (Optional[str]): The template for the legend.
  • aggregate (Optional[bool]): If set to True, aggregate the data.
  • xaxis_expression (Optional[str]): The expression for the x-axis.
  • legend_fields (Optional[LList[str]]): The fields to include in the legend.

class Link

A link to a URL.

Attributes:

  • text (Union[str, TextWithInlineComments]): The text of the link.
  • url (str): The URL the link points to.

class MarkdownBlock

A block of markdown text. Useful if you want to write text that uses common markdown syntax.

Attributes:

  • text (str): The markdown text.

class MarkdownPanel

A panel that renders markdown.

Attributes:

  • markdown (str): The text you want to appear in the markdown panel.

class MediaBrowser

A panel that displays media files in a grid layout.

Attributes:

  • num_columns (Optional[int]): The number of columns in the grid.
  • media_keys (LList[str]): A list of media keys that correspond to the media files.

class Metric

A metric to display in a report that is logged in your project.

Attributes:

  • name (str): The name of the metric.

class OrderBy

A metric to order by.

Attributes:

  • name (str): The name of the metric.
  • ascending (bool): Whether to sort in ascending order. By default set to False.

class OrderedList

A list of items in a numbered list.

Attributes:

  • items (LList[str]): A list of one or more OrderedListItem objects.

class OrderedListItem

A list item in an ordered list.

Attributes:

  • text (str): The text of the list item.

class P

A paragraph of text.

Attributes:

  • text (str): The text of the paragraph.

class Panel

A panel that displays a visualization in a panel grid.

Attributes:

  • layout (Layout): A Layout object.

class PanelGrid

A grid that consists of runsets and panels. Add runsets and panels with Runset and Panel objects, respectively.

Available panels include: LinePlot, ScatterPlot, BarPlot, ScalarChart, CodeComparer, ParallelCoordinatesPlot, ParameterImportancePlot, RunComparer, MediaBrowser, MarkdownPanel, CustomChart, WeavePanel, WeavePanelSummaryTable, WeavePanelArtifactVersionedFile.

Attributes:

  • runsets (LList[“Runset”]): A list of one or more Runset objects.
  • panels (LList[“PanelTypes”]): A list of one or more Panel objects.
  • active_runset (int): The number of runs you want to display within a runset. By default, it is set to 0.
  • custom_run_colors (dict): Key-value pairs where the key is the name of a run and the value is a color specified by a hexadecimal value.

class ParallelCoordinatesPlot

A panel object that shows a parallel coordinates plot.

Attributes:

  • columns (LList[ParallelCoordinatesPlotColumn]): A list of one or more ParallelCoordinatesPlotColumn objects.
  • title (Optional[str]): The text that appears at the top of the plot.
  • gradient (Optional[LList[GradientPoint]]): A list of gradient points.
  • font_size (Optional[FontSize]): The size of the line plot’s font. Options include small, medium, large, auto, or None.

class ParallelCoordinatesPlotColumn

A column within a parallel coordinates plot. The order of metrics specified determines the order of the parallel axes (x-axis) in the parallel coordinates plot.

Attributes:

  • metric (str | Config | SummaryMetric): The name of the metric logged to your W&B project that the report pulls information from.
  • display_name (Optional[str]): The name of the metric
  • inverted (Optional[bool]): Whether to invert the metric.
  • log (Optional[bool]): Whether to apply a log transformation to the metric.

class ParameterImportancePlot

A panel that shows how important each hyperparameter is in predicting the chosen metric.

Attributes:

  • with_respect_to (str): The metric you want to compare the parameter importance against. Common metrics might include the loss, accuracy, and so forth. The metric you specify must be logged within the project that the report pulls information from.

class Report

An object that represents a W&B Report. Use the returned object’s blocks attribute to customize your report. Report objects do not automatically save. Use the save() method to persist changes.

Attributes:

  • project (str): The name of the W&B project you want to load in. The project specified appears in the report’s URL.
  • entity (str): The W&B entity that owns the report. The entity appears in the report’s URL.
  • title (str): The title of the report. The title appears at the top of the report as an H1 heading.
  • description (str): A description of the report. The description appears underneath the report’s title.
  • blocks (LList[BlockTypes]): A list of one or more HTML tags, plots, grids, runsets, and more.
  • width (Literal[‘readable’, ‘fixed’, ‘fluid’]): The width of the report. Options include ‘readable’, ‘fixed’, ‘fluid’.

property url

The URL where the report is hosted. The report URL has the form https://wandb.ai/{entity}/{project_name}/reports/, where {entity} and {project_name} are the entity that the report belongs to and the name of the project, respectively.


classmethod from_url

from_url(url: str, as_model: bool = False)

Load a report into the current environment. Pass in the URL where the report is hosted.

Arguments:

  • url (str): The URL where the report is hosted.
  • as_model (bool): If True, return the model object instead of the Report object. By default, set to False.

method save

save(draft: bool = False, clone: bool = False)

Persists changes made to a report object.


method to_html

to_html(height: int = 1024, hidden: bool = False) -> str

Generate HTML containing an iframe displaying this report. Commonly used within a Python notebook.

Arguments:

  • height (int): Height of the iframe.
  • hidden (bool): If True, hide the iframe. Default set to False.

class RunComparer

A panel that compares metrics across different runs from the project the report pulls information from.

Attributes:

  • diff_only (Optional[Literal["split", True]]): Display only the difference across runs in a project. You can toggle this feature on and off in the W&B Report UI.

class Runset

A set of runs to display in a panel grid.

Attributes:

  • entity (str): An entity that owns or has the correct permissions to the project where the runs are stored.
  • project (str): The name of the project where the runs are stored.
  • name (str): The name of the run set. Set to Run set by default.
  • query (str): A query string to filter runs.
  • filters (Optional[str]): A filter string to filter runs.
  • groupby (LList[str]): A list of metric names to group by.
  • order (LList[OrderBy]): A list of OrderBy objects to order by.
  • custom_run_colors (LList[OrderBy]): A dictionary mapping run IDs to colors.

class RunsetGroup

UI element that shows a group of runsets.

Attributes:

  • runset_name (str): The name of the runset.
  • keys (Tuple[RunsetGroupKey, …]): The keys to group by. Pass in one or more RunsetGroupKey objects to group by.

class RunsetGroupKey

Groups runsets by a metric type and value. Part of a RunsetGroup. Specify the metric type and value to group by as key-value pairs.

Attributes:

  • key (Type[str] | Type[Config] | Type[SummaryMetric] | Type[Metric]): The metric type to group by.
  • value (str): The value of the metric to group by.

class ScalarChart

A panel object that shows a scalar chart.

Attributes:

  • title (Optional[str]): The text that appears at the top of the plot.
  • metric (MetricType): The name of a metric logged to your W&B project that the report pulls information from.
  • groupby_aggfunc (Optional[GroupAgg]): Aggregate runs with specified function. Options include mean, min, max, median, sum, samples, or None.
  • groupby_rangefunc (Optional[GroupArea]): Group runs based on a range. Options include minmax, stddev, stderr, none, samples, or None.
  • custom_expressions (Optional[LList[str]]): A list of custom expressions to be used in the scalar chart.
  • legend_template (Optional[str]): The template for the legend.
  • font_size (Optional[FontSize]): The size of the line plot’s font. Options include small, medium, large, auto, or None.

class ScatterPlot

A panel object that shows a 2D or 3D scatter plot.

Arguments:

  • title (Optional[str]): The text that appears at the top of the plot.
  • x (Optional[SummaryOrConfigOnlyMetric]): The name of a metric logged to your W&B project that the report pulls information from. The metric specified is used for the x-axis.
  • y (Optional[SummaryOrConfigOnlyMetric]): One or more metrics logged to your W&B project that the report pulls information from. Metrics specified are plotted within the y-axis.
  • z (Optional[SummaryOrConfigOnlyMetric]): The name of a metric logged to your W&B project that the report pulls information from. The metric specified is used for the z-axis.
  • range_x (Tuple[float | None, float | None]): Tuple that specifies the range of the x-axis.
  • range_y (Tuple[float | None, float | None]): Tuple that specifies the range of the y-axis.
  • range_z (Tuple[float | None, float | None]): Tuple that specifies the range of the z-axis.
  • log_x (Optional[bool]): Plots the x-coordinates using a base-10 logarithmic scale.
  • log_y (Optional[bool]): Plots the y-coordinates using a base-10 logarithmic scale.
  • log_z (Optional[bool]): Plots the z-coordinates using a base-10 logarithmic scale.
  • running_ymin (Optional[bool]): Apply a moving average or rolling mean.
  • running_ymax (Optional[bool]): Apply a moving average or rolling mean.
  • running_ymean (Optional[bool]): Apply a moving average or rolling mean.
  • legend_template (Optional[str]): A string that specifies the format of the legend.
  • gradient (Optional[LList[GradientPoint]]): A list of gradient points that specify the color gradient of the plot.
  • font_size (Optional[FontSize]): The size of the line plot’s font. Options include small, medium, large, auto, or None.
  • regression (Optional[bool]): If True, a regression line is plotted on the scatter plot.

class SoundCloud

A block that renders a SoundCloud player.

Attributes:

  • html (str): The HTML code to embed the SoundCloud player.

class Spotify

A block that renders a Spotify player.

Attributes:

  • spotify_id (str): The Spotify ID of the track or playlist.

class SummaryMetric

A summary metric to display in a report.

Attributes:

  • name (str): The name of the metric.

class TableOfContents

A block that contains a list of sections and subsections using H1, H2, and H3 HTML blocks specified in a report.


class TextWithInlineComments

A block of text with inline comments.

Attributes:

  • text (str): The text of the block.

class Twitter

A block that displays a Twitter feed.

Attributes:

  • html (str): The HTML code to display the Twitter feed.

class UnorderedList

A list of items in a bulleted list.

Attributes:

  • items (LList[str]): A list of one or more UnorderedListItem objects.

class UnorderedListItem

A list item in an unordered list.

Attributes:

  • text (str): The text of the list item.

class Video

A block that renders a video.

Attributes:

  • url (str): The URL of the video.

class WeaveBlockArtifact

A block that shows an artifact logged to W&B. The query takes the form of

project('entity', 'project').artifact('artifact-name')

The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.

Attributes:

  • entity (str): The entity that owns or has the appropriate permissions to the project where the artifact is stored.
  • project (str): The project where the artifact is stored.
  • artifact (str): The name of the artifact to retrieve.
  • tab (Literal["overview", "metadata", "usage", "files", "lineage"]): The tab to display in the artifact panel.

class WeaveBlockArtifactVersionedFile

A block that shows a versioned file logged to a W&B artifact. The query takes the form of

project('entity', 'project').artifactVersion('name', 'version').file('file-name')

The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.

Attributes:

  • entity (str): The entity that owns or has the appropriate permissions to the project where the artifact is stored.
  • project (str): The project where the artifact is stored.
  • artifact (str): The name of the artifact to retrieve.
  • version (str): The version of the artifact to retrieve.
  • file (str): The name of the file stored in the artifact to retrieve.

class WeaveBlockSummaryTable

A block that shows a W&B Table, pandas DataFrame, plot, or other value logged to W&B. The query takes the form of

project('entity', 'project').runs.summary['value']

The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.

Attributes:

  • entity (str): The entity that owns or has the appropriate permissions to the project where the values are logged.
  • project (str): The project where the value is logged in.
  • table_name (str): The name of the table, DataFrame, plot, or value.

class WeavePanel

An empty query panel that can be used to display custom content using queries.

The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.


class WeavePanelArtifact

A panel that shows an artifact logged to W&B.

The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.

Attributes:

  • artifact (str): The name of the artifact to retrieve.
  • tab (Literal["overview", "metadata", "usage", "files", "lineage"]): The tab to display in the artifact panel.

class WeavePanelArtifactVersionedFile

A panel that shows a versioned file logged to a W&B artifact. The query takes the form of

project('entity', 'project').artifactVersion('name', 'version').file('file-name')

The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.

Attributes:

  • artifact (str): The name of the artifact to retrieve.
  • version (str): The version of the artifact to retrieve.
  • file (str): The name of the file stored in the artifact to retrieve.

class WeavePanelSummaryTable

A panel that shows a W&B Table, pandas DataFrame, plot, or other value logged to W&B. The query takes the form of

runs.summary['value']

The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.

Attributes:

  • table_name (str): The name of the table, DataFrame, plot, or value.

4.17.2 - Workspaces

module wandb_workspaces.workspaces

Python library for programmatically working with W&B Workspace API.

# How to import
import wandb_workspaces.reports.v2 as wr
import wandb_workspaces.workspaces as ws

# Example of creating a workspace
workspace = ws.Workspace(
     name="Example W&B Workspace",
     entity="entity", # entity that owns the workspace
     project="project", # project that the workspace is associated with
     sections=[
         ws.Section(
             name="Validation Metrics",
             panels=[
                 wr.LinePlot(x="Step", y=["val_loss"]),
                 wr.BarPlot(metrics=["val_accuracy"]),
                 wr.ScalarChart(metric="f1_score", groupby_aggfunc="mean"),
             ],
             is_open=True,
         ),
     ],
)
workspace.save()

class RunSettings

Settings for a run in a runset (left hand bar).

Attributes:

  • color (str): The color of the run in the UI. Can be hex (#ff0000), css color (red), or rgb (rgb(255, 0, 0))
  • disabled (bool): Whether the run is deactivated (eye closed in the UI). Default is set to False.

class RunsetSettings

Settings for the runset (the left bar containing runs) in a workspace.

Attributes:

  • query (str): A query to filter the runset. Can be a regular expression; see regex_query.
  • regex_query (bool): Controls whether query is treated as a regular expression. Default is set to False.
  • filters (LList[expr.FilterExpr]): A list of filters to apply to the runset. Filters are AND’d together. See FilterExpr for more information on creating filters.
  • groupby (LList[expr.MetricType]): A list of metrics to group by in the runset. Set to Metric, Summary, Config, Tags, or KeysInfo.
  • order (LList[expr.Ordering]): A list of metrics and ordering to apply to the runset.
  • run_settings (Dict[str, RunSettings]): A dictionary of run settings, where the key is the run’s ID and the value is a RunSettings object.

class Section

Represents a section in a workspace.

Attributes:

  • name (str): The name/title of the section.
  • panels (LList[PanelTypes]): An ordered list of panels in the section. By default, first is top-left and last is bottom-right.
  • is_open (bool): Whether the section is open or closed. Default is closed.
  • layout_settings (Literal[standard, custom]): Settings for panel layout in the section.
  • panel_settings: Panel-level settings applied to all panels in the section, similar to WorkspaceSettings for a Section.

class SectionLayoutSettings

Panel layout settings for a section, typically seen at the top right of the section of the W&B App Workspace UI.

Attributes:

  • layout (Literal[standard, custom]): The layout of panels in the section. standard follows the default grid layout; custom allows per-panel layouts controlled by the individual panel settings.
  • columns (int): In a standard layout, the number of columns in the layout. Default is 3.
  • rows (int): In a standard layout, the number of rows in the layout. Default is 2.

class SectionPanelSettings

Panel settings for a section, similar to WorkspaceSettings for a section.

Settings applied here can be overridden by more granular Panel settings in this priority: Section < Panel.

Attributes:

  • x_axis (str): X-axis metric name setting. By default, set to Step.
  • x_min (Optional[float]): Minimum value for the x-axis.
  • x_max (Optional[float]): Maximum value for the x-axis.
  • smoothing_type (Literal['exponentialTimeWeighted', 'exponential', 'gaussian', 'average', 'none']): Smoothing type applied to all panels.
  • smoothing_weight (int): Smoothing weight applied to all panels.

class Workspace

Represents a W&B workspace, including sections, settings, and config for run sets.

Attributes:

  • entity (str): The entity this workspace will be saved to (usually user or team name).
  • project (str): The project this workspace will be saved to.
  • name (str): The name of the workspace.
  • sections (LList[Section]): An ordered list of sections in the workspace. The first section is at the top of the workspace.
  • settings (WorkspaceSettings): Settings for the workspace, typically seen at the top of the workspace in the UI.
  • runset_settings (RunsetSettings): Settings for the runset (the left bar containing runs) in a workspace.

property url

The URL to the workspace in the W&B app.


classmethod from_url

from_url(url: str)

Get a workspace from a URL.


method save

save()

Save the current workspace to W&B.

Returns:

  • Workspace: The updated workspace with the saved internal name and ID.

method save_as_new_view

save_as_new_view()

Save the current workspace as a new view to W&B.

Returns:

  • Workspace: The updated workspace with the saved internal name and ID.

class WorkspaceSettings

Settings for the workspace, typically seen at the top of the workspace in the UI.

This object includes settings for the x-axis, smoothing, outliers, panels, tooltips, runs, and panel query bar.

Settings applied here can be overridden by more granular Section and Panel settings in this priority: Workspace < Section < Panel.

Attributes:

  • x_axis (str): X-axis metric name setting.
  • x_min (Optional[float]): Minimum value for the x-axis.
  • x_max (Optional[float]): Maximum value for the x-axis.
  • smoothing_type (Literal['exponentialTimeWeighted', 'exponential', 'gaussian', 'average', 'none']): Smoothing type applied to all panels.
  • smoothing_weight (int): Smoothing weight applied to all panels.
  • ignore_outliers (bool): Ignore outliers in all panels.
  • sort_panels_alphabetically (bool): Sorts panels in all sections alphabetically.
  • group_by_prefix (Literal[first, last]): Group panels by the first prefix or up to the last prefix. Default is set to last.
  • remove_legends_from_panels (bool): Remove legends from all panels.
  • tooltip_number_of_runs (Literal[default, all, none]): The number of runs to show in the tooltip.
  • tooltip_color_run_names (bool): Whether to color run names in the tooltip to match the runset (True) or not (False). Default is set to True.
  • max_runs (int): The maximum number of runs to show per panel (this will be the first 10 runs in the runset).
  • point_visualization_method (Literal[line, point, line_point]): The visualization method for points.
  • panel_search_query (str): The query for the panel search bar (can be a regex expression).
  • auto_expand_panel_search_results (bool): Whether to auto expand the panel search results.

4.18 - watch

Hooks into the given PyTorch model(s) to monitor gradients and the model’s computational graph.

watch(
    models: (torch.nn.Module | Sequence[torch.nn.Module]),
    criterion: (torch.F | None) = None,
    log: (Literal['gradients', 'parameters', 'all'] | None) = "gradients",
    log_freq: int = 1000,
    idx: (int | None) = None,
    log_graph: bool = False
) -> None

This function can track parameters, gradients, or both during training. It should be extended to support arbitrary machine learning models in the future.

Args:

  • models (Union[torch.nn.Module, Sequence[torch.nn.Module]]): A single model or a sequence of models to be monitored.
  • criterion (Optional[torch.F]): The loss function being optimized (optional).
  • log (Optional[Literal["gradients", "parameters", "all"]]): Specifies whether to log "gradients", "parameters", or "all". Set to None to disable logging. (default="gradients")
  • log_freq (int): Frequency (in batches) to log gradients and parameters. (default=1000)
  • idx (Optional[int]): Index used when tracking multiple models with wandb.watch. (default=None)
  • log_graph (bool): Whether to log the model’s computational graph. (default=False)

Raises:

  • ValueError: If wandb.init has not been called or if any of the models are not instances of torch.nn.Module.

5 - Python Reference

Custom Charts

Create custom charts and visualizations.

Analytics and Query API

Query and analyze data logged to W&B.

Automations

Automate your W&B workflows.

Python SDK

Train and fine-tune models, manage models from experimentation to production.

5.1 - Custom Charts

Create custom charts and visualizations.

5.1.1 - bar()

function bar

bar(
    table: 'wandb.Table',
    label: 'str',
    value: 'str',
    title: 'str' = '',
    split_table: 'bool' = False
) → CustomChart

Constructs a bar chart from a wandb.Table of data.

Args:

  • table: A table containing the data for the bar chart.
  • label: The name of the column to use for the labels of each bar.
  • value: The name of the column to use for the values of each bar.
  • title: The title of the bar chart.
  • split_table: Whether the table should be split into a separate section in the W&B UI. If True, the table will be displayed in a section named “Custom Chart Tables”. Default is False.

Returns:

  • CustomChart: A custom chart object that can be logged to W&B. To log the chart, pass it to wandb.log().

Example:

import random
import wandb

# Generate random data for the table
data = [
    ["car", random.uniform(0, 1)],
    ["bus", random.uniform(0, 1)],
    ["road", random.uniform(0, 1)],
    ["person", random.uniform(0, 1)],
]

# Create a table with the data
table = wandb.Table(data=data, columns=["class", "accuracy"])

# Initialize a W&B run and log the bar plot
with wandb.init(project="bar_chart") as run:
    # Create a bar plot from the table
    bar_plot = wandb.plot.bar(
         table=table,
         label="class",
         value="accuracy",
         title="Object Classification Accuracy",
    )

    # Log the bar chart to W&B
    run.log({"bar_plot": bar_plot})

5.1.2 - confusion_matrix()

function confusion_matrix

confusion_matrix(
    probs: 'Sequence[Sequence[float]] | None' = None,
    y_true: 'Sequence[T] | None' = None,
    preds: 'Sequence[T] | None' = None,
    class_names: 'Sequence[str] | None' = None,
    title: 'str' = 'Confusion Matrix Curve',
    split_table: 'bool' = False
) → CustomChart

Constructs a confusion matrix from a sequence of probabilities or predictions.

Args:

  • probs: A sequence of predicted probabilities for each class. The sequence shape should be (N, K) where N is the number of samples and K is the number of classes. If provided, preds should not be provided.
  • y_true: A sequence of true labels.
  • preds: A sequence of predicted class labels. If provided, probs should not be provided.
  • class_names: Sequence of class names. If not provided, class names will be defined as “Class_1”, “Class_2”, etc.
  • title: Title of the confusion matrix chart.
  • split_table: Whether the table should be split into a separate section in the W&B UI. If True, the table will be displayed in a section named “Custom Chart Tables”. Default is False.

Returns:

  • CustomChart: A custom chart object that can be logged to W&B. To log the chart, pass it to wandb.log().

Raises:

  • ValueError: If both probs and preds are provided or if the number of predictions and true labels are not equal. If the number of unique predicted classes exceeds the number of class names or if the number of unique true labels exceeds the number of class names.
  • wandb.Error: If numpy is not installed.

Examples: Logging a confusion matrix with random probabilities for wildlife classification:

import numpy as np
import wandb

# Define class names for wildlife
wildlife_class_names = ["Lion", "Tiger", "Elephant", "Zebra"]

# Generate random true labels (0 to 3 for 10 samples)
wildlife_y_true = np.random.randint(0, 4, size=10)

# Generate random probabilities for each class (10 samples x 4 classes)
wildlife_probs = np.random.rand(10, 4)
wildlife_probs = np.exp(wildlife_probs) / np.sum(
    np.exp(wildlife_probs),
    axis=1,
    keepdims=True,
)

# Initialize W&B run and log confusion matrix
with wandb.init(project="wildlife_classification") as run:
    confusion_matrix = wandb.plot.confusion_matrix(
         probs=wildlife_probs,
         y_true=wildlife_y_true,
         class_names=wildlife_class_names,
         title="Wildlife Classification Confusion Matrix",
    )
    run.log({"wildlife_confusion_matrix": confusion_matrix})

In this example, random probabilities are used to generate a confusion matrix.

Logging a confusion matrix with simulated model predictions and 85% accuracy:

import numpy as np
import wandb

# Define class names for wildlife
wildlife_class_names = ["Lion", "Tiger", "Elephant", "Zebra"]

# Simulate true labels for 200 animal images (imbalanced distribution)
wildlife_y_true = np.random.choice(
    [0, 1, 2, 3],
    size=200,
    p=[0.2, 0.3, 0.25, 0.25],
)

# Simulate model predictions with 85% accuracy
wildlife_preds = [
    y_t
    if np.random.rand() < 0.85
    else np.random.choice([x for x in range(4) if x != y_t])
    for y_t in wildlife_y_true
]

# Initialize W&B run and log confusion matrix
with wandb.init(project="wildlife_classification") as run:
    confusion_matrix = wandb.plot.confusion_matrix(
         preds=wildlife_preds,
         y_true=wildlife_y_true,
         class_names=wildlife_class_names,
         title="Simulated Wildlife Classification Confusion Matrix",
    )
    run.log({"wildlife_confusion_matrix": confusion_matrix})

In this example, predictions are simulated with 85% accuracy to generate a confusion matrix.

5.1.3 - histogram()

function histogram

histogram(
    table: 'wandb.Table',
    value: 'str',
    title: 'str' = '',
    split_table: 'bool' = False
) → CustomChart

Constructs a histogram chart from a W&B Table.

Args:

  • table: The W&B Table containing the data for the histogram.
  • value: The label for the bin axis (x-axis).
  • title: The title of the histogram plot.
  • split_table: Whether the table should be split into a separate section in the W&B UI. If True, the table will be displayed in a section named “Custom Chart Tables”. Default is False.

Returns:

  • CustomChart: A custom chart object that can be logged to W&B. To log the chart, pass it to wandb.log().

Example:

import math
import random
import wandb

# Generate random data
data = [[i, random.random() + math.sin(i / 10)] for i in range(100)]

# Create a W&B Table
table = wandb.Table(
    data=data,
    columns=["step", "height"],
)

# Create a histogram plot
histogram = wandb.plot.histogram(
    table,
    value="height",
    title="My Histogram",
)

# Log the histogram plot to W&B
with wandb.init(...) as run:
    run.log({"histogram-plot1": histogram})

5.1.4 - line_series()

function line_series

line_series(
    xs: 'Iterable[Iterable[Any]] | Iterable[Any]',
    ys: 'Iterable[Iterable[Any]]',
    keys: 'Iterable[str] | None' = None,
    title: 'str' = '',
    xname: 'str' = 'x',
    split_table: 'bool' = False
) → CustomChart

Constructs a line series chart.

Args:

  • xs: Sequence of x values. If a singular array is provided, all y values are plotted against that x array. If an array of arrays is provided, each y value is plotted against the corresponding x array.
  • ys: Sequence of y values, where each iterable represents a separate line series.
  • keys: Sequence of keys for labeling each line series. If not provided, keys will be automatically generated as “line_1”, “line_2”, etc.
  • title: Title of the chart.
  • xname: Label for the x-axis.
  • split_table: Whether the table should be split into a separate section in the W&B UI. If True, the table will be displayed in a section named “Custom Chart Tables”. Default is False.

Returns:

  • CustomChart: A custom chart object that can be logged to W&B. To log the chart, pass it to wandb.log().

Examples: Logging a single x array where all y series are plotted against the same x values:

import wandb

# Initialize W&B run
with wandb.init(project="line_series_example") as run:
    # x values shared across all y series
    xs = list(range(10))

    # Multiple y series to plot
    ys = [
         [i for i in range(10)],  # y = x
         [i**2 for i in range(10)],  # y = x^2
         [i**3 for i in range(10)],  # y = x^3
    ]

    # Generate and log the line series chart
    line_series_chart = wandb.plot.line_series(
         xs,
         ys,
         title="title",
         xname="step",
    )
    run.log({"line-series-single-x": line_series_chart})

In this example, a single xs series (shared x-values) is used for all ys series. This results in each y-series being plotted against the same x-values (0-9).

Logging multiple x arrays where each y series is plotted against its corresponding x array:

import wandb

# Initialize W&B run
with wandb.init(project="line_series_example") as run:
    # Separate x values for each y series
    xs = [
         [i for i in range(10)],  # x for first series
         [2 * i for i in range(10)],  # x for second series (stretched)
         [3 * i for i in range(10)],  # x for third series (stretched more)
    ]

    # Corresponding y series
    ys = [
         [i for i in range(10)],  # y = x
         [i**2 for i in range(10)],  # y = x^2
         [i**3 for i in range(10)],  # y = x^3
    ]

    # Generate and log the line series chart
    line_series_chart = wandb.plot.line_series(
         xs, ys, title="Multiple X Arrays Example", xname="Step"
    )
    run.log({"line-series-multiple-x": line_series_chart})

In this example, each y series is plotted against its own unique x series. This allows for more flexibility when the x values are not uniform across the data series.

Customizing line labels using keys:

import wandb

# Initialize W&B run
with wandb.init(project="line_series_example") as run:
    xs = list(range(10))  # Single x array
    ys = [
         [i for i in range(10)],  # y = x
         [i**2 for i in range(10)],  # y = x^2
         [i**3 for i in range(10)],  # y = x^3
    ]

    # Custom labels for each line
    keys = ["Linear", "Quadratic", "Cubic"]

    # Generate and log the line series chart
    line_series_chart = wandb.plot.line_series(
         xs,
         ys,
         keys=keys,  # Custom keys (line labels)
         title="Custom Line Labels Example",
         xname="Step",
    )
    run.log({"line-series-custom-keys": line_series_chart})

This example shows how to provide custom labels for the lines using the keys argument. The keys will appear in the legend as “Linear”, “Quadratic”, and “Cubic”.

5.1.5 - line()

function line

line(
    table: 'wandb.Table',
    x: 'str',
    y: 'str',
    stroke: 'str | None' = None,
    title: 'str' = '',
    split_table: 'bool' = False
) → CustomChart

Constructs a customizable line chart.

Args:

  • table: The table containing data for the chart.
  • x: Column name for the x-axis values.
  • y: Column name for the y-axis values.
  • stroke: Column name to differentiate line strokes (e.g., for grouping lines).
  • title: Title of the chart.
  • split_table: Whether the table should be split into a separate section in the W&B UI. If True, the table will be displayed in a section named “Custom Chart Tables”. Default is False.

Returns:

  • CustomChart: A custom chart object that can be logged to W&B. To log the chart, pass it to wandb.log().

Example:

import math
import random
import wandb

# Create multiple series of data with different patterns
data = []
for i in range(100):
     # Series 1: Sinusoidal pattern with random noise
     data.append([i, math.sin(i / 10) + random.uniform(-0.1, 0.1), "series_1"])
     # Series 2: Cosine pattern with random noise
     data.append([i, math.cos(i / 10) + random.uniform(-0.1, 0.1), "series_2"])
     # Series 3: Linear increase with random noise
     data.append([i, i / 10 + random.uniform(-0.5, 0.5), "series_3"])

# Define the columns for the table
table = wandb.Table(data=data, columns=["step", "value", "series"])

# Initialize wandb run and log the line chart
with wandb.init(project="line_chart_example") as run:
     line_chart = wandb.plot.line(
         table=table,
         x="step",
         y="value",
         stroke="series",  # Group by the "series" column
         title="Multi-Series Line Plot",
     )
     run.log({"line-chart": line_chart})

5.1.6 - plot

module wandb

Chart Visualization Utilities

This module offers a collection of predefined chart types, along with functionality for creating custom charts, enabling flexible visualization of your data beyond the built-in options.

5.1.7 - plot_table()

function plot_table

plot_table(
    vega_spec_name: 'str',
    data_table: 'wandb.Table',
    fields: 'dict[str, Any]',
    string_fields: 'dict[str, Any] | None' = None,
    split_table: 'bool' = False
) → CustomChart

Creates a custom chart using a Vega-Lite specification and a wandb.Table.

This function creates a custom chart based on a Vega-Lite specification and a data table represented by a wandb.Table object. The specification needs to be predefined and stored in the W&B backend. The function returns a custom chart object that can be logged to W&B using wandb.log().

Args:

  • vega_spec_name: The name or identifier of the Vega-Lite spec that defines the visualization structure.
  • data_table: A wandb.Table object containing the data to be visualized.
  • fields: A mapping between the fields in the Vega-Lite spec and the corresponding columns in the data table to be visualized.
  • string_fields: A dictionary for providing values for any string constants required by the custom visualization.
  • split_table: Whether the table should be split into a separate section in the W&B UI. If True, the table will be displayed in a section named “Custom Chart Tables”. Default is False.

Returns:

  • CustomChart: A custom chart object that can be logged to W&B. To log the chart, pass it to wandb.log().

Raises:

  • wandb.Error: If data_table is not a wandb.Table object.

Example:

# Create a custom chart using a Vega-Lite spec and the data table.
import wandb

wandb.init()

data = [[1, 1], [2, 2], [3, 3], [4, 4], [5, 5]]
table = wandb.Table(data=data, columns=["x", "y"])

fields = {"x": "x", "y": "y", "title": "MY TITLE"}

# Create a custom title with `string_fields`.
my_custom_chart = wandb.plot_table(
   vega_spec_name="wandb/line/v0",
   data_table=table,
   fields=fields,
   string_fields={"title": "Title"},
)

wandb.log({"custom_chart": my_custom_chart})

5.1.8 - pr_curve()

function pr_curve

pr_curve(
    y_true: 'Iterable[T] | None' = None,
    y_probas: 'Iterable[numbers.Number] | None' = None,
    labels: 'list[str] | None' = None,
    classes_to_plot: 'list[T] | None' = None,
    interp_size: 'int' = 21,
    title: 'str' = 'Precision-Recall Curve',
    split_table: 'bool' = False
) → CustomChart

Constructs a Precision-Recall (PR) curve.

The Precision-Recall curve is particularly useful for evaluating classifiers on imbalanced datasets. A high area under the PR curve signifies both high precision (a low false positive rate) and high recall (a low false negative rate). The curve provides insights into the balance between false positives and false negatives at various threshold levels, aiding in the assessment of a model’s performance.

Args:

  • y_true: True binary labels. The shape should be (num_samples,).
  • y_probas: Predicted scores or probabilities for each class. These can be probability estimates, confidence scores, or non-thresholded decision values. The shape should be (num_samples, num_classes).
  • labels: Optional list of class names to replace numeric values in y_true for easier plot interpretation. For example, labels = ['dog', 'cat', 'owl'] will replace 0 with ‘dog’, 1 with ‘cat’, and 2 with ‘owl’ in the plot. If not provided, numeric values from y_true will be used.
  • classes_to_plot: Optional list of unique class values from y_true to be included in the plot. If not specified, all unique classes in y_true will be plotted.
  • interp_size: Number of points to interpolate recall values. The recall values will be fixed to interp_size uniformly distributed points in the range [0, 1], and the precision will be interpolated accordingly.
  • title: Title of the plot. Defaults to “Precision-Recall Curve”.
  • split_table: Whether the table should be split into a separate section in the W&B UI. If True, the table will be displayed in a section named “Custom Chart Tables”. Default is False.

Returns:

  • CustomChart: A custom chart object that can be logged to W&B. To log the chart, pass it to wandb.log().

Raises:

  • wandb.Error: If NumPy, pandas, or scikit-learn is not installed.

Example:

import wandb

# Example for spam detection (binary classification)
y_true = [0, 1, 1, 0, 1]  # 0 = not spam, 1 = spam
y_probas = [
    [0.9, 0.1],  # Predicted probabilities for the first sample (not spam)
    [0.2, 0.8],  # Second sample (spam), and so on
    [0.1, 0.9],
    [0.8, 0.2],
    [0.3, 0.7],
]

labels = ["not spam", "spam"]  # Optional class names for readability

with wandb.init(project="spam-detection") as run:
    pr_curve = wandb.plot.pr_curve(
         y_true=y_true,
         y_probas=y_probas,
         labels=labels,
         title="Precision-Recall Curve for Spam Detection",
    )
    run.log({"pr-curve": pr_curve})

5.1.9 - roc_curve()

function roc_curve

roc_curve(
    y_true: 'Sequence[numbers.Number]',
    y_probas: 'Sequence[Sequence[float]] | None' = None,
    labels: 'list[str] | None' = None,
    classes_to_plot: 'list[numbers.Number] | None' = None,
    title: 'str' = 'ROC Curve',
    split_table: 'bool' = False
) → CustomChart

Constructs Receiver Operating Characteristic (ROC) curve chart.

Args:

  • y_true: The true class labels (ground truth) for the target variable. Shape should be (num_samples,).
  • y_probas: The predicted probabilities or decision scores for each class. Shape should be (num_samples, num_classes).
  • labels: Human-readable labels corresponding to the class indices in y_true. For example, if labels=['dog', 'cat'], class 0 will be displayed as ‘dog’ and class 1 as ‘cat’ in the plot. If None, the raw class indices from y_true will be used. Default is None.
  • classes_to_plot: A subset of unique class labels to include in the ROC curve. If None, all classes in y_true will be plotted. Default is None.
  • title: Title of the ROC curve plot. Default is “ROC Curve”.
  • split_table: Whether the table should be split into a separate section in the W&B UI. If True, the table will be displayed in a section named “Custom Chart Tables”. Default is False.

Returns:

  • CustomChart: A custom chart object that can be logged to W&B. To log the chart, pass it to wandb.log().

Raises:

  • wandb.Error: If numpy, pandas, or scikit-learn are not found.

Example:

import numpy as np
import wandb

# Simulate a medical diagnosis classification problem with three diseases
n_samples = 200
n_classes = 3

# True labels: assign "Diabetes", "Hypertension", or "Heart Disease" to
# each sample
disease_labels = ["Diabetes", "Hypertension", "Heart Disease"]
# 0: Diabetes, 1: Hypertension, 2: Heart Disease
y_true = np.random.choice([0, 1, 2], size=n_samples)

# Predicted probabilities: simulate predictions, ensuring they sum to 1
# for each sample
y_probas = np.random.dirichlet(np.ones(n_classes), size=n_samples)

# Specify classes to plot (plotting all three diseases)
classes_to_plot = [0, 1, 2]

# Initialize a W&B run and log a ROC curve plot for disease classification
with wandb.init(project="medical_diagnosis") as run:
   roc_plot = wandb.plot.roc_curve(
        y_true=y_true,
        y_probas=y_probas,
        labels=disease_labels,
        classes_to_plot=classes_to_plot,
        title="ROC Curve for Disease Classification",
   )
   run.log({"roc-curve": roc_plot})

5.1.10 - scatter()

function scatter

scatter(
    table: 'wandb.Table',
    x: 'str',
    y: 'str',
    title: 'str' = '',
    split_table: 'bool' = False
) → CustomChart

Constructs a scatter plot from a wandb.Table of data.

Args:

  • table: The W&B Table containing the data to visualize.
  • x: The name of the column used for the x-axis.
  • y: The name of the column used for the y-axis.
  • title: The title of the scatter chart.
  • split_table: Whether the table should be split into a separate section in the W&B UI. If True, the table will be displayed in a section named “Custom Chart Tables”. Default is False.

Returns:

  • CustomChart: A custom chart object that can be logged to W&B. To log the chart, pass it to wandb.log().

Example:

import math
import random
import wandb

# Simulate temperature variations at different altitudes over time
data = [
   [i, random.uniform(-10, 20) - 0.005 * i + 5 * math.sin(i / 50)]
   for i in range(300)
]

# Create W&B table with altitude (m) and temperature (°C) columns
table = wandb.Table(data=data, columns=["altitude (m)", "temperature (°C)"])

# Initialize W&B run and log the scatter plot
with wandb.init(project="temperature-altitude-scatter") as run:
   # Create and log the scatter plot
   scatter_plot = wandb.plot.scatter(
        table=table,
        x="altitude (m)",
        y="temperature (°C)",
        title="Altitude vs Temperature",
   )
   run.log({"altitude-temperature-scatter": scatter_plot})

5.1.11 - visualize()

function visualize

visualize(id: 'str', value: 'Table') → Visualize

5.2 - Analytics and Query API

Query and analyze data logged to W&B.

5.2.1 - api

module wandb.apis.public

Use the Public API to export or update data that you have saved to W&B.

Before using this API, you’ll want to log data from your script — check the Quickstart for more details.

You might use the Public API to

  • update metadata or metrics for an experiment after it has been completed,
  • pull down your results as a dataframe for post-hoc analysis in a Jupyter notebook, or
  • check your saved model artifacts for those tagged as ready-to-deploy.

For more on using the Public API, check out our guide.

class RetryingClient

method RetryingClient.__init__

__init__(client: wandb_gql.client.Client)

property RetryingClient.app_url


property RetryingClient.server_info


method RetryingClient.execute

execute(*args, **kwargs)

method RetryingClient.version_supported

version_supported(min_version: str) → bool

class Api

Used for querying the W&B server.

Examples:

import wandb

wandb.Api()

method Api.__init__

__init__(
    overrides: Optional[Dict[str, Any]] = None,
    timeout: Optional[int] = None,
    api_key: Optional[str] = None
) → None

Initialize the API.

Args:

  • overrides (dict[str, Any] | None): You can set base_url if you are using a W&B server other than https://api.wandb.ai. You can also set defaults for entity, project, and run.
  • timeout (int | None): HTTP timeout in seconds for API requests. If not specified, the default timeout will be used.
  • api_key (str | None): API key to use for authentication. If not provided, the API key from the current environment or configuration will be used.
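The overrides argument is a plain dictionary. A minimal sketch using the keys named above (base_url, entity, project); the host name here is hypothetical:

```python
# Hypothetical values; pass this dict as wandb.Api(overrides=...).
overrides = {
    "base_url": "https://wandb.my-company.example",  # private W&B server
    "entity": "my-team",  # default entity for queries
    "project": "my-project",  # default project for queries
}
```

With these defaults set, path arguments to the API can omit the entity and project components.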

property Api.api_key

Returns W&B API key.


property Api.client

Returns the client object.


property Api.default_entity

Returns the default W&B entity.


property Api.user_agent

Returns W&B public user agent.


property Api.viewer

Returns the viewer object.


method Api.artifact

artifact(name: str, type: Optional[str] = None)

Returns a single artifact.

Args:

  • name: The artifact’s name. The name of an artifact resembles a filepath that consists of, at a minimum, the name of the project the artifact was logged to, the name of the artifact, and the artifact’s version or alias. Optionally append the entity that logged the artifact as a prefix followed by a forward slash. If no entity is specified in the name, the Run or API setting’s entity is used.
  • type: The type of artifact to fetch.

Returns: An Artifact object.

Raises:

  • ValueError: If the artifact name is not specified.
  • ValueError: If the artifact type is specified but does not match the type of the fetched artifact.

Examples: In the following code snippets “entity”, “project”, “artifact”, “version”, and “alias” are placeholders for your W&B entity, the name of the project the artifact is in, the name of the artifact, the artifact’s version, and the artifact’s alias, respectively.

import wandb

# Specify the project, artifact's name, and the artifact's alias
wandb.Api().artifact(name="project/artifact:alias")

# Specify the project, artifact's name, and a specific artifact version
wandb.Api().artifact(name="project/artifact:version")

# Specify the entity, project, artifact's name, and the artifact's alias
wandb.Api().artifact(name="entity/project/artifact:alias")

# Specify the entity, project, artifact's name, and a specific artifact version
wandb.Api().artifact(name="entity/project/artifact:version")

Note:

This method is intended for external use only. Do not call api.artifact() within the wandb repository code.


method Api.artifact_collection

artifact_collection(type_name: str, name: str) → public.ArtifactCollection

Returns a single artifact collection by type.

You can use the returned ArtifactCollection object to retrieve information about specific artifacts in that collection, and more.

Args:

  • type_name: The type of artifact collection to fetch.
  • name: An artifact collection name. Optionally append the entity that logged the artifact as a prefix followed by a forward slash.

Returns: An ArtifactCollection object.

Examples: In the following code snippet “type”, “entity”, “project”, and “artifact_name” are placeholders for the collection type, your W&B entity, the name of the project the artifact is in, and the name of the artifact, respectively.

import wandb

collections = wandb.Api().artifact_collection(
    type_name="type", name="entity/project/artifact_name"
)

# Get the first artifact in the collection
artifact_example = collections.artifacts()[0]

# Download the contents of the artifact to the specified root directory.
artifact_example.download()

method Api.artifact_collection_exists

artifact_collection_exists(name: str, type: str) → bool

Whether an artifact collection exists within a specified project and entity.

Args:

  • name: An artifact collection name. Optionally append the entity that logged the artifact as a prefix followed by a forward slash. If entity or project is not specified, infer the collection from the override params if they exist. Otherwise, entity is pulled from the user settings and project will default to “uncategorized”.
  • type: The type of artifact collection.

Returns: True if the artifact collection exists, False otherwise.

Examples: In the following code snippet, “type” and “collection_name” refer to the type of the artifact collection and the name of the collection, respectively.

import wandb

wandb.Api().artifact_collection_exists(type="type", name="collection_name")

method Api.artifact_collections

artifact_collections(
    project_name: str,
    type_name: str,
    per_page: int = 50
) → public.ArtifactCollections

Returns a collection of matching artifact collections.

Args:

  • project_name: The name of the project to filter on.
  • type_name: The name of the artifact type to filter on.
  • per_page: Sets the page size for query pagination. None will use the default size. Usually there is no reason to change this.

Returns: An iterable ArtifactCollections object.


method Api.artifact_exists

artifact_exists(name: str, type: Optional[str] = None) → bool

Whether an artifact version exists within the specified project and entity.

Args:

  • name: The name of artifact. Add the artifact’s entity and project as a prefix. Append the version or the alias of the artifact with a colon. If the entity or project is not specified, W&B uses override parameters if populated. Otherwise, the entity is pulled from the user settings and the project is set to “Uncategorized”.
  • type: The type of artifact.

Returns: True if the artifact version exists, False otherwise.

Examples: In the following code snippets “entity”, “project”, “artifact”, “version”, and “alias” are placeholders for your W&B entity, the name of the project the artifact is in, the name of the artifact, the artifact’s version, and the artifact’s alias, respectively.

import wandb

wandb.Api().artifact_exists("entity/project/artifact:version")
wandb.Api().artifact_exists("entity/project/artifact:alias")

method Api.artifact_type

artifact_type(
    type_name: str,
    project: Optional[str] = None
) → public.ArtifactType

Returns the matching ArtifactType.

Args:

  • type_name: The name of the artifact type to retrieve.
  • project: If given, a project name or path to filter on.

Returns: An ArtifactType object.


method Api.artifact_types

artifact_types(project: Optional[str] = None) → public.ArtifactTypes

Returns a collection of matching artifact types.

Args:

  • project: The project name or path to filter on.

Returns: An iterable ArtifactTypes object.


method Api.artifact_versions

artifact_versions(type_name, name, per_page=50)

Deprecated. Use Api.artifacts(type_name, name) method instead.


method Api.artifacts

artifacts(
    type_name: str,
    name: str,
    per_page: int = 50,
    tags: Optional[List[str]] = None
) → public.Artifacts

Return an Artifacts collection.

Args:

  • type_name: The type of artifacts to fetch.
  • name: The artifact’s collection name. Optionally append the entity that logged the artifact as a prefix followed by a forward slash.
  • per_page: Sets the page size for query pagination. If set to None, use the default size. Usually there is no reason to change this.
  • tags: Only return artifacts with all of these tags.

Returns: An iterable Artifacts object.

Examples: In the following code snippet, “type”, “entity”, “project”, and “artifact_name” are placeholders for the artifact type, W&B entity, name of the project the artifact was logged to, and the name of the artifact, respectively.

import wandb

wandb.Api().artifacts(type_name="type", name="entity/project/artifact_name")

method Api.automation

automation(name: str, entity: Optional[str] = None) → Automation

Returns the only Automation matching the parameters.

Args:

  • name: The name of the automation to fetch.
  • entity: The entity to fetch the automation for.

Raises:

  • ValueError: If zero or multiple Automations match the search criteria.

Examples: Get an existing automation named “my-automation”:

import wandb

api = wandb.Api()
automation = api.automation(name="my-automation")

Get an existing automation named “other-automation” from the entity “my-team”:

automation = api.automation(name="other-automation", entity="my-team")

method Api.automations

automations(
    entity: Optional[str] = None,
    name: Optional[str] = None,
    per_page: int = 50
) → Iterator[ForwardRef('Automation')]

Returns an iterator over all Automations that match the given parameters.

If no parameters are provided, the returned iterator will contain all Automations that the user has access to.

Args:

  • entity: The entity to fetch the automations for.
  • name: The name of the automation to fetch.
  • per_page: The number of automations to fetch per page. Defaults to 50. Usually there is no reason to change this.

Returns: An iterator of automations.

Examples: Fetch all existing automations for the entity “my-team”:

import wandb

api = wandb.Api()
automations = api.automations(entity="my-team")

method Api.create_automation

create_automation(
    obj: 'NewAutomation',
    fetch_existing: bool = False,
    **kwargs: typing_extensions.Unpack[ForwardRef('WriteAutomationsKwargs')]
) → Automation

Create a new Automation.

Args:

  • obj: The automation to create.
  • fetch_existing: If True, and a conflicting automation already exists, attempt to fetch the existing automation instead of raising an error.
  • **kwargs: Any additional values to assign to the automation before creating it. If given, these will override any values that may already be set on the automation: name (the name of the automation), description (the description), enabled (whether the automation is enabled), scope (the scope), event (the event that triggers the automation), and action (the action that is triggered).

Returns: The saved Automation.

Examples: Create a new automation named “my-automation” that sends a Slack notification when a run within a specific project logs a metric exceeding a custom threshold:

import wandb
from wandb.automations import OnRunMetric, RunEvent, SendNotification

api = wandb.Api()

project = api.project("my-project", entity="my-team")

# Use the first Slack integration for the team
slack_hook = next(api.slack_integrations(entity="my-team"))

event = OnRunMetric(
    scope=project,
    filter=RunEvent.metric("custom-metric") > 10,
)
action = SendNotification.from_integration(slack_hook)

automation = api.create_automation(
    event >> action,
    name="my-automation",
    description="Send a Slack message whenever 'custom-metric' exceeds 10.",
)

method Api.create_project

create_project(name: str, entity: str) → None

Create a new project.

Args:

  • name: The name of the new project.
  • entity: The entity of the new project.

method Api.create_registry

create_registry(
    name: str,
    visibility: Literal['organization', 'restricted'],
    organization: Optional[str] = None,
    description: Optional[str] = None,
    artifact_types: Optional[List[str]] = None
) → Registry

Create a new registry.

Args:

  • name: The name of the registry. Name must be unique within the organization.
  • visibility: The visibility of the registry.
    • organization: Anyone in the organization can view this registry. You can edit their roles later from the settings in the UI.
    • restricted: Only invited members via the UI can access this registry. Public sharing is disabled.
  • organization: The organization of the registry. If no organization is set in the settings, the organization will be fetched from the entity if the entity only belongs to one organization.
  • description: The description of the registry.
  • artifact_types: The accepted artifact types of the registry. A type must be no more than 128 characters and must not include the characters / or :. If not specified, all types are accepted. Allowed types added to the registry cannot be removed later.

Returns: A registry object.

Examples:

import wandb

api = wandb.Api()
registry = api.create_registry(
    name="my-registry",
    visibility="restricted",
    organization="my-org",
    description="This is a test registry",
    artifact_types=["model"],
)

method Api.create_run

create_run(
    run_id: Optional[str] = None,
    project: Optional[str] = None,
    entity: Optional[str] = None
) → public.Run

Create a new run.

Args:

  • run_id: The ID to assign to the run. If not specified, W&B creates a random ID.
  • project: The project where to log the run to. If no project is specified, log the run to a project called “Uncategorized”.
  • entity: The entity that owns the project. If no entity is specified, log the run to the default entity.

Returns: The newly created Run.


method Api.create_run_queue

create_run_queue(
    name: str,
    type: 'public.RunQueueResourceType',
    entity: Optional[str] = None,
    prioritization_mode: Optional[ForwardRef('public.RunQueuePrioritizationMode')] = None,
    config: Optional[dict] = None,
    template_variables: Optional[dict] = None
) → public.RunQueue

Create a new run queue in W&B Launch.

Args:

  • name: Name of the queue to create
  • type: Type of resource to be used for the queue. One of “local-container”, “local-process”, “kubernetes”,“sagemaker”, or “gcp-vertex”.
  • entity: Name of the entity to create the queue. If None, use the configured or default entity.
  • prioritization_mode: Version of prioritization to use. Either “V0” or None.
  • config: Default resource configuration to be used for the queue. Use handlebars (eg. {{var}}) to specify template variables.
  • template_variables: A dictionary of template variable schemas to use with the config.

Returns: The newly created RunQueue.

Raises:

  • ValueError: If any of the parameters are invalid.
  • wandb.Error: On wandb API errors.
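As an illustration of the config and template_variables parameters, a queue configuration can pair a handlebars placeholder with a schema that constrains it. The names and schema shape below are hypothetical, not a definitive format:

```python
# Hypothetical resource config for a queue: the {{gpu_count}} placeholder
# is a handlebars template variable filled in at launch time.
config = {
    "resource_args": {
        "gpu_count": "{{gpu_count}}",
    }
}

# Hypothetical schema constraining the template variable's allowed values.
template_variables = {
    "gpu_count": {"schema": {"type": "integer", "minimum": 0, "maximum": 8}}
}
```

These dictionaries would then be passed as the config and template_variables arguments of create_run_queue.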


method Api.create_team

create_team(team: str, admin_username: Optional[str] = None) → public.Team

Create a new team.

Args:

  • team: The name of the team
  • admin_username: Username of the admin user of the team. Defaults to the current user.

Returns: A Team object.


method Api.create_user

create_user(email: str, admin: Optional[bool] = False)

Create a new user.

Args:

  • email: The email address of the user.
  • admin: Set user as a global instance administrator.

Returns: A User object.


method Api.delete_automation

delete_automation(obj: Union[ForwardRef('Automation'), str]) → Literal[True]

Delete an automation.

Args:

  • obj: The automation to delete, or its ID.

Returns: True if the automation was deleted successfully.


method Api.flush

flush()

Flush the local cache.

The api object keeps a local cache of runs, so if the state of the run may change while executing your script you must clear the local cache with api.flush() to get the latest values associated with the run.


method Api.from_path

from_path(path: str)

Return a run, sweep, project or report from a path.

Args:

  • path: The path to the project, run, sweep or report

Returns: A Project, Run, Sweep, or BetaReport instance.

Raises:

  • wandb.Error: If path is invalid or the object doesn’t exist.

Examples: In the following code snippets “project”, “team”, “run_id”, “sweep_id”, and “report_name” are placeholders for the project, team, run ID, sweep ID, and the name of a specific report, respectively.

import wandb

api = wandb.Api()

project = api.from_path("project")
team_project = api.from_path("team/project")
run = api.from_path("team/project/runs/run_id")
sweep = api.from_path("team/project/sweeps/sweep_id")
report = api.from_path("team/project/reports/report_name")

method Api.integrations

integrations(
    entity: Optional[str] = None,
    per_page: int = 50
) → Iterator[ForwardRef('Integration')]

Return an iterator of all integrations for an entity.

Args:

  • entity: The entity (e.g. team name) for which to fetch integrations. If not provided, the user’s default entity will be used.
  • per_page: Number of integrations to fetch per page. Defaults to 50. Usually there is no reason to change this.

Yields:

  • Iterator[SlackIntegration | WebhookIntegration]: An iterator of any supported integrations.

method Api.job

job(name: Optional[str], path: Optional[str] = None) → public.Job

Return a Job object.

Args:

  • name: The name of the job.
  • path: The root path to download the job artifact.

Returns: A Job object.


method Api.list_jobs

list_jobs(entity: str, project: str) → List[Dict[str, Any]]

Return a list of jobs, if any, for the given entity and project.

Args:

  • entity: The entity for the listed jobs.
  • project: The project for the listed jobs.

Returns: A list of matching jobs.


method Api.project

project(name: str, entity: Optional[str] = None) → public.Project

Return the Project with the given name (and entity, if given).

Args:

  • name: The project name.
  • entity: Name of the entity requested. If None, will fall back to the default entity passed to Api. If no default entity, will raise a ValueError.

Returns: A Project object.


method Api.projects

projects(entity: Optional[str] = None, per_page: int = 200) → public.Projects

Get projects for a given entity.

Args:

  • entity: Name of the entity requested. If None, will fall back to the default entity passed to Api. If no default entity, will raise a ValueError.
  • per_page: Sets the page size for query pagination. If set to None, use the default size. Usually there is no reason to change this.

Returns: A Projects object which is an iterable collection of Project objects.


method Api.queued_run

queued_run(
    entity: str,
    project: str,
    queue_name: str,
    run_queue_item_id: str,
    project_queue=None,
    priority=None
)

Return a single queued run based on the path.

Parses paths of the form entity/project/queue_id/run_queue_item_id.
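The path format can be illustrated with a small standalone helper. This function is not part of the wandb API; it is purely for illustration of how the four components map to the method's parameters:

```python
# Illustrative only: split a queued-run path of the form
# "entity/project/queue_id/run_queue_item_id" into its components.
def split_queued_run_path(path: str) -> dict:
    entity, project, queue_id, item_id = path.split("/")
    return {
        "entity": entity,
        "project": project,
        "queue_id": queue_id,
        "run_queue_item_id": item_id,
    }

parts = split_queued_run_path("my-team/my-project/gpu-queue/item-42")
```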


method Api.registries

registries(
    organization: Optional[str] = None,
    filter: Optional[Dict[str, Any]] = None
) → Registries

Returns a Registry iterator.

Use the iterator to search and filter registries, collections, or artifact versions across your organization’s registry.

Examples: Find all registries with names that contain “model”:

import wandb

api = wandb.Api()  # specify an org if your entity belongs to multiple orgs
api.registries(filter={"name": {"$regex": "model"}})

Find all collections in the registries with the name “my_collection” and the tag “my_tag”:

api.registries().collections(filter={"name": "my_collection", "tag": "my_tag"})

Find all artifact versions in the registries with a collection name that contains “my_collection” and a version that has the alias “best”:

api.registries().collections(
    filter={"name": {"$regex": "my_collection"}}
).versions(filter={"alias": "best"})

Find all artifact versions in the registries that contain “model” and have the tag “prod” or alias “best”:

api.registries(filter={"name": {"$regex": "model"}}).versions(
    filter={"$or": [{"tag": "prod"}, {"alias": "best"}]}
)

Args:

  • organization: (str, optional) The organization of the registry to fetch. If not specified, use the organization specified in the user’s settings.
  • filter: (dict, optional) MongoDB-style filter to apply to each object in the registry iterator. Fields available to filter for registries are name, description, created_at, and updated_at. Fields available to filter for collections are name, tag, description, created_at, and updated_at. Fields available to filter for versions are tag, alias, created_at, updated_at, and metadata.

Returns: A registry iterator.


method Api.registry

registry(name: str, organization: Optional[str] = None) → Registry

Return a registry given a registry name.

Args:

  • name: The name of the registry. This is without the wandb-registry- prefix.
  • organization: The organization of the registry. If no organization is set in the settings, the organization will be fetched from the entity if the entity only belongs to one organization.

Returns: A registry object.

Examples: Fetch and update a registry:

import wandb

api = wandb.Api()
registry = api.registry(name="my-registry", organization="my-org")
registry.description = "This is an updated description"
registry.save()

method Api.reports

reports(
    path: str = '',
    name: Optional[str] = None,
    per_page: int = 50
) → public.Reports

Get reports for a given project path.

Note: wandb.Api.reports() API is in beta and will likely change in future releases.

Args:

  • path: The path to the project the report resides in. Specify the entity that created the project as a prefix followed by a forward slash.
  • name: Name of the report requested.
  • per_page: Sets the page size for query pagination. If set to None, use the default size. Usually there is no reason to change this.

Returns: A Reports object which is an iterable collection of BetaReport objects.

Examples:

import wandb

wandb.Api().reports("entity/project")

method Api.run

run(path='')

Return a single run by parsing path in the form entity/project/run_id.

Args:

  • path: Path to run in the form entity/project/run_id. If api.entity is set, this can be in the form project/run_id and if api.project is set this can just be the run_id.

Returns: A Run object.


method Api.run_queue

run_queue(entity: str, name: str)

Return the named RunQueue for entity.

See Api.create_run_queue for more information on how to create a run queue.


method Api.runs

runs(
    path: Optional[str] = None,
    filters: Optional[Dict[str, Any]] = None,
    order: str = '+created_at',
    per_page: int = 50,
    include_sweeps: bool = True
)

Return a set of runs from a project that match the filters provided.

Fields you can filter by include:

  • createdAt: The timestamp when the run was created, in ISO 8601 format (e.g. “2023-01-01T12:00:00Z”).
  • displayName: The human-readable display name of the run. (e.g. “eager-fox-1”)
  • duration: The total runtime of the run in seconds.
  • group: The group name used to organize related runs together.
  • host: The hostname where the run was executed.
  • jobType: The type of job or purpose of the run.
  • name: The unique identifier of the run. (e.g. “a1b2cdef”)
  • state: The current state of the run.
  • tags: The tags associated with the run.
  • username: The username of the user who initiated the run.

Additionally, you can filter by items in the run config or summary metrics. Such as config.experiment_name, summary_metrics.loss, etc.

For more complex filtering, you can use MongoDB query operators. For details, see: https://docs.mongodb.com/manual/reference/operator/query The following operations are supported:

  • $and
  • $or
  • $nor
  • $eq
  • $ne
  • $gt
  • $gte
  • $lt
  • $lte
  • $in
  • $nin
  • $exists
  • $regex
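These operators compose as ordinary nested dictionaries. A sketch (values hypothetical) combining a state check with a regex on a config key:

```python
# A MongoDB-style filter, built as a plain dict: finished runs whose
# config.experiment_name contains "ablation" (regex anchors unsupported).
filters = {
    "$and": [
        {"state": "finished"},
        {"config.experiment_name": {"$regex": "ablation"}},
    ]
}
# Pass as: api.runs(path="entity/project", filters=filters)
```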

Args:

  • path: (str) path to project, should be in the form: “entity/project”
  • filters: (dict) queries for specific runs using the MongoDB query language. You can filter by run properties such as config.key, summary_metrics.key, state, entity, createdAt, etc.
  • For example: {"config.experiment_name": "foo"} would find runs with a config entry of experiment name set to “foo”
  • order: (str) Order can be created_at, heartbeat_at, config.*.value, or summary_metrics.*. If you prepend order with a + order is ascending. If you prepend order with a - order is descending (default). The default order is run.created_at from oldest to newest.
  • per_page: (int) Sets the page size for query pagination.
  • include_sweeps: (bool) Whether to include the sweep runs in the results.

Returns: A Runs object, which is an iterable collection of Run objects.

Examples:

# Find runs in project where config.experiment_name has been set to "foo"
api.runs(path="my_entity/project", filters={"config.experiment_name": "foo"})
# Find runs in project where config.experiment_name has been set to "foo" or "bar"
api.runs(
    path="my_entity/project",
    filters={
         "$or": [
             {"config.experiment_name": "foo"},
             {"config.experiment_name": "bar"},
         ]
    },
)
# Find runs in project where config.experiment_name matches a regex
# (anchors are not supported)
api.runs(
    path="my_entity/project",
    filters={"config.experiment_name": {"$regex": "b.*"}},
)
# Find runs in project where the run name matches a regex
# (anchors are not supported)
api.runs(
    path="my_entity/project", filters={"display_name": {"$regex": "^foo.*"}}
)
# Find runs in project sorted by ascending loss
api.runs(path="my_entity/project", order="+summary_metrics.loss")

method Api.slack_integrations

slack_integrations(
    entity: Optional[str] = None,
    per_page: int = 50
) → Iterator[ForwardRef('SlackIntegration')]

Returns an iterator of Slack integrations for an entity.

Args:

  • entity: The entity (e.g. team name) for which to fetch integrations. If not provided, the user’s default entity will be used.
  • per_page: Number of integrations to fetch per page. Defaults to 50. Usually there is no reason to change this.

Yields:

  • Iterator[SlackIntegration]: An iterator of Slack integrations.

Examples: Get all registered Slack integrations for the team “my-team”:

import wandb

api = wandb.Api()
slack_integrations = api.slack_integrations(entity="my-team")

Find only Slack integrations that post to channel names starting with “team-alerts-”:

slack_integrations = api.slack_integrations(entity="my-team")
team_alert_integrations = [
    ig for ig in slack_integrations if ig.channel_name.startswith("team-alerts-")
]


method Api.sweep

sweep(path='')

Return a sweep by parsing path in the form entity/project/sweep_id.

Args:

  • path: Path to sweep in the form entity/project/sweep_id. If api.entity is set, this can be in the form project/sweep_id and if api.project is set this can just be the sweep_id.

Returns: A Sweep object.


method Api.sync_tensorboard

sync_tensorboard(root_dir, run_id=None, project=None, entity=None)

Sync a local directory containing tfevent files to wandb.


method Api.team

team(team: str) -> public.Team

Return the matching Team with the given name.

Args:

  • team: The name of the team.

Returns: A Team object.


method Api.update_automation

update_automation(
    obj: 'Automation',
    create_missing: bool = False,
    **kwargs: typing_extensions.Unpack[ForwardRef('WriteAutomationsKwargs')]
) -> Automation

Update an existing automation.

Args:

  • obj: The automation to update. Must be an existing automation.
  • create_missing (bool): If True, and the automation does not exist, create it.
  • **kwargs: Any additional values to assign to the automation before updating it. If given, these will override any values that may already be set on the automation:
      - name: The name of the automation.
      - description: The description of the automation.
      - enabled: Whether the automation is enabled.
      - scope: The scope of the automation.
      - event: The event that triggers the automation.
      - action: The action that is triggered by the automation.

Returns: The updated automation.

Examples: Disable and edit the description of an existing automation (“my-automation”):

import wandb

api = wandb.Api()

automation = api.automation(name="my-automation")
automation.enabled = False
automation.description = "Kept for reference, but no longer used."

updated_automation = api.update_automation(automation)

Or, equivalently:

import wandb

api = wandb.Api()

automation = api.automation(name="my-automation")

updated_automation = api.update_automation(
    automation,
    enabled=False,
    description="Kept for reference, but no longer used.",
)

method Api.upsert_run_queue

upsert_run_queue(
    name: str,
    resource_config: dict,
    resource_type: 'public.RunQueueResourceType',
    entity: Optional[str] = None,
    template_variables: Optional[dict] = None,
    external_links: Optional[dict] = None,
    prioritization_mode: Optional[ForwardRef('public.RunQueuePrioritizationMode')] = None
)

Upsert a run queue in W&B Launch.

Args:

  • name: Name of the queue to create
  • entity: Optional name of the entity to create the queue. If None, use the configured or default entity.
  • resource_config: Optional default resource configuration to be used for the queue. Use handlebars (eg. {{var}}) to specify template variables.
  • resource_type: Type of resource to be used for the queue. One of “local-container”, “local-process”, “kubernetes”, “sagemaker”, or “gcp-vertex”.
  • template_variables: A dictionary of template variable schemas to be used with the config.
  • external_links: Optional dictionary of external links to be used with the queue.
  • prioritization_mode: Optional version of prioritization to use. Either “V0” or None

Returns: The upserted RunQueue.

Raises:

  • ValueError: If any of the parameters are invalid.
  • wandb.Error: On wandb API errors.
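The handlebars placeholders in resource_config pair with entries in template_variables. The following is a toy illustration of how `{{var}}` substitution behaves, not Launch's actual renderer; `render_handlebars` is a hypothetical name used here only for illustration.

```python
import re

def render_handlebars(text, variables):
    # Replace each {{var}} placeholder with the matching template variable.
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: str(variables[m.group(1)]), text)

# A resource_config fragment using handlebars, rendered with concrete values:
config_line = '{"gpu_type": "{{gpu}}", "replicas": {{replicas}}}'
rendered = render_handlebars(config_line, {"gpu": "a100", "replicas": 2})
```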


method Api.user

user(username_or_email: str) -> Optional[ForwardRef('public.User')]

Return a user from a username or email address.

This function only works for local administrators. Use api.viewer to get your own user object.

Args:

  • username_or_email: The username or email address of the user.

Returns: A User object or None if a user is not found.


method Api.users

users(username_or_email: str) -> List[ForwardRef('public.User')]

Return all users from a partial username or email address query.

This function only works for local administrators. Use api.viewer to get your own user object.

Args:

  • username_or_email: The prefix or suffix of the user you want to find.

Returns: An array of User objects.


method Api.webhook_integrations

webhook_integrations(
    entity: Optional[str] = None,
    per_page: int = 50
) -> Iterator[ForwardRef('WebhookIntegration')]

Returns an iterator of webhook integrations for an entity.

Args:

  • entity: The entity (e.g. team name) for which to fetch integrations. If not provided, the user’s default entity will be used.
  • per_page: Number of integrations to fetch per page. Defaults to 50. Usually there is no reason to change this.

Yields:

  • Iterator[WebhookIntegration]: An iterator of webhook integrations.

Examples: Get all registered webhook integrations for the team “my-team”:

import wandb

api = wandb.Api()
webhook_integrations = api.webhook_integrations(entity="my-team")

Find only webhook integrations that post requests to “https://my-fake-url.com”:

webhook_integrations = api.webhook_integrations(entity="my-team")
my_webhooks = [
    ig for ig in webhook_integrations if ig.url_endpoint.startswith("https://my-fake-url.com")
]

5.2.2 - artifacts

module wandb.apis.public

W&B Public API for Artifact objects.

This module provides classes for interacting with W&B artifacts and their collections.

function server_supports_artifact_collections_gql_edges

server_supports_artifact_collections_gql_edges(
    client: 'RetryingClient',
    warn: 'bool' = False
) -> bool

Check if W&B server supports GraphQL edges for artifact collections.


class ArtifactTypes

method ArtifactTypes.__init__

__init__(client: 'Client', entity: 'str', project: 'str', per_page: 'int' = 50)

property ArtifactTypes.cursor

Returns the cursor for the next page of results.


property ArtifactTypes.length

Returns None.


property ArtifactTypes.more

Returns whether there are more artifact types to fetch.


method ArtifactTypes.convert_objects

convert_objects() -> list[ArtifactType]

Convert the raw response data into a list of ArtifactType objects.


method ArtifactTypes.update_variables

update_variables() -> None

Update the cursor variable for pagination.
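ArtifactTypes follows the same paginator contract used throughout this module (cursor, more, convert_objects, update_variables). Below is a schematic, non-wandb sketch of how iteration drives those hooks; `TinyPaginator` is an illustrative stand-in, not the library's class.

```python
class TinyPaginator:
    """Illustrative cursor-based paginator; not wandb's implementation."""

    def __init__(self, pages):
        self._pages = pages  # stand-in for successive server responses
        self._cursor = 0     # real classes track a GraphQL cursor string

    @property
    def more(self):
        # Whether there are more pages to fetch.
        return self._cursor < len(self._pages)

    def convert_objects(self, page):
        # Real classes turn raw response data into e.g. ArtifactType objects.
        return list(page)

    def update_variables(self):
        # Real classes copy the cursor into the next query's variables.
        self._cursor += 1

    def __iter__(self):
        while self.more:
            page = self._pages[self._cursor]
            self.update_variables()
            yield from self.convert_objects(page)
```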


class ArtifactType

An artifact object that satisfies query based on the specified type.

Args:

  • client: The client instance to use for querying W&B.
  • entity: The entity (user or team) that owns the project.
  • project: The name of the project to query for artifact types.
  • type_name: The name of the artifact type.
  • attrs: Optional mapping of attributes to initialize the artifact type. If not provided, the object will load its attributes from W&B upon initialization.

method ArtifactType.__init__

__init__(
    client: 'Client',
    entity: 'str',
    project: 'str',
    type_name: 'str',
    attrs: 'Mapping[str, Any] | None' = None
)

property ArtifactType.id

The unique identifier of the artifact type.


property ArtifactType.name

The name of the artifact type.


method ArtifactType.collection

collection(name: 'str') -> ArtifactCollection

Get a specific artifact collection by name.

Args:

  • name (str): The name of the artifact collection to retrieve.

method ArtifactType.collections

collections(per_page: 'int' = 50) -> ArtifactCollections

Get all artifact collections associated with this artifact type.

Args:

  • per_page (int): The number of artifact collections to fetch per page. Default is 50.

method ArtifactType.load

load() -> Mapping[str, Any]

Load the artifact type attributes from W&B.


class ArtifactCollections

Artifact collections of a specific type in a project.

Args:

  • client: The client instance to use for querying W&B.
  • entity: The entity (user or team) that owns the project.
  • project: The name of the project to query for artifact collections.
  • type_name: The name of the artifact type for which to fetch collections.
  • per_page: The number of artifact collections to fetch per page. Default is 50.

method ArtifactCollections.__init__

__init__(
    client: 'Client',
    entity: 'str',
    project: 'str',
    type_name: 'str',
    per_page: 'int' = 50
)

property ArtifactCollections.cursor

Returns the cursor for the next page of results.


property ArtifactCollections.length


property ArtifactCollections.more

Returns whether there are more artifacts to fetch.


method ArtifactCollections.convert_objects

convert_objects() -> list[ArtifactCollection]

Convert the raw response data into a list of ArtifactCollection objects.


method ArtifactCollections.update_variables

update_variables() -> None

Update the cursor variable for pagination.


class ArtifactCollection

An artifact collection that represents a group of related artifacts.

Args:

  • client: The client instance to use for querying W&B.
  • entity: The entity (user or team) that owns the project.
  • project: The name of the project to query for artifact collections.
  • name: The name of the artifact collection.
  • type: The type of the artifact collection (e.g., “dataset”, “model”).
  • organization: Optional organization name if applicable.
  • attrs: Optional mapping of attributes to initialize the artifact collection. If not provided, the object will load its attributes from W&B upon initialization.

method ArtifactCollection.__init__

__init__(
    client: 'Client',
    entity: 'str',
    project: 'str',
    name: 'str',
    type: 'str',
    organization: 'str | None' = None,
    attrs: 'Mapping[str, Any] | None' = None,
    is_sequence: 'bool | None' = None
)

property ArtifactCollection.aliases

Artifact Collection Aliases.


property ArtifactCollection.created_at

The creation date of the artifact collection.


property ArtifactCollection.description

A description of the artifact collection.


property ArtifactCollection.id

The unique identifier of the artifact collection.


property ArtifactCollection.name

The name of the artifact collection.


property ArtifactCollection.tags

The tags associated with the artifact collection.


property ArtifactCollection.type

Returns the type of the artifact collection.


method ArtifactCollection.artifacts

artifacts(per_page: 'int' = 50) -> Artifacts

Get all artifacts in the collection.


method ArtifactCollection.change_type

change_type(new_type: 'str') -> None

Deprecated: change the collection's type directly, then call save() instead.


method ArtifactCollection.delete

delete() -> None

Delete the entire artifact collection.


method ArtifactCollection.is_sequence

is_sequence() -> bool

Return whether the artifact collection is a sequence.


method ArtifactCollection.load

load()

Load the artifact collection attributes from W&B.


method ArtifactCollection.save

save() -> None

Persist any changes made to the artifact collection.


class Artifacts

An iterable collection of artifact versions associated with a project.

Optionally pass in filters to narrow down the results based on specific criteria.

Args:

  • client: The client instance to use for querying W&B.
  • entity: The entity (user or team) that owns the project.
  • project: The name of the project to query for artifacts.
  • collection_name: The name of the artifact collection to query.
  • type: The type of the artifacts to query. Common examples include “dataset” or “model”.
  • filters: Optional mapping of filters to apply to the query.
  • order: Optional string to specify the order of the results.
  • per_page: The number of artifact versions to fetch per page. Default is 50.
  • tags: Optional string or list of strings to filter artifacts by tags.

method Artifacts.__init__

__init__(
    client: 'Client',
    entity: 'str',
    project: 'str',
    collection_name: 'str',
    type: 'str',
    filters: 'Mapping[str, Any] | None' = None,
    order: 'str | None' = None,
    per_page: 'int' = 50,
    tags: 'str | list[str] | None' = None
)

property Artifacts.cursor

Returns the cursor for the next page of results.


property Artifacts.length

Returns the total number of artifacts in the collection.


property Artifacts.more

Returns whether there are more files to fetch.


method Artifacts.convert_objects

convert_objects() -> list[Artifact]

Convert the raw response data into a list of wandb.Artifact objects.


class RunArtifacts

method RunArtifacts.__init__

__init__(
    client: 'Client',
    run: 'Run',
    mode: "Literal['logged', 'used']" = 'logged',
    per_page: 'int' = 50
)

property RunArtifacts.cursor

Returns the cursor for the next page of results.


property RunArtifacts.length

Returns the total number of artifacts in the collection.


property RunArtifacts.more

Returns whether there are more artifacts to fetch.


method RunArtifacts.convert_objects

convert_objects() -> list[Artifact]

Convert the raw response data into a list of wandb.Artifact objects.


class ArtifactFiles

method ArtifactFiles.__init__

__init__(
    client: 'Client',
    artifact: 'Artifact',
    names: 'Sequence[str] | None' = None,
    per_page: 'int' = 50
)

property ArtifactFiles.cursor

Returns the cursor for the next page of results.


property ArtifactFiles.length

Returns the total number of files in the artifact.


property ArtifactFiles.more

Returns whether there are more files to fetch.


property ArtifactFiles.path

Returns the path of the artifact.


method ArtifactFiles.convert_objects

convert_objects() -> list[public.File]

Convert the raw response data into a list of public.File objects.


method ArtifactFiles.update_variables

update_variables() -> None

Update the variables dictionary with the cursor.

5.2.3 - automations

module wandb.apis.public

W&B Public API for Automation objects.

class Automations

An iterable collection of Automation objects.

method Automations.__init__

__init__(
    client: '_Client',
    variables: 'Mapping[str, Any]',
    per_page: 'int' = 50,
    _query: 'Document | None' = None
)

property Automations.cursor

The start cursor to use for the next page.


property Automations.more

Whether there are more items to fetch.


method Automations.convert_objects

convert_objects() -> Iterable[Automation]

Parse the page data into a list of objects.

5.2.4 - files

module wandb.apis.public

W&B Public API for File objects.

This module provides classes for interacting with files stored in W&B.

Example:

from wandb.apis.public import Api

# Initialize API
api = Api()

# Get files from a specific run
run = api.run("entity/project/run_id")
files = run.files()

# Work with files
for file in files:
    print(f"File: {file.name}")
    print(f"Size: {file.size} bytes")
    print(f"Type: {file.mimetype}")

    # Download file
    if file.size < 1000000:  # Less than 1MB
        file.download(root="./downloads")

    # Get S3 URI for large files
    if file.size >= 1000000:
        print(f"S3 URI: {file.path_uri}")

Note:

This module is part of the W&B Public API and provides methods to access, download, and manage files stored in W&B. Files are typically associated with specific runs and can include model weights, datasets, visualizations, and other artifacts.

class Files

An iterable collection of File objects.

Access and manage files uploaded to W&B during a run. Handles pagination automatically when iterating through large collections of files.

Args:

  • client: The API client instance to use
  • run: The run object that contains the files
  • names (list, optional): A list of file names to filter the files
  • per_page (int, optional): The number of files to fetch per page
  • upload (bool, optional): If True, fetch the upload URL for each file

Example:

from wandb.apis.public.files import Files
from wandb.apis.public.api import Api

# Initialize the API client
api = Api()

# Example run object
run = api.run("entity/project/run-id")

# Create a Files object to iterate over files in the run
files = Files(api.client, run)

# Iterate over files
for file in files:
   print(file.name)
   print(file.url)
   print(file.size)

   # Download the file
   file.download(root="download_directory", replace=True)

method Files.__init__

__init__(client, run, names=None, per_page=50, upload=False)

property Files.cursor

Returns the cursor position for pagination of file results.


property Files.length

The number of files saved to the specified run.


property Files.more

Returns whether there are more files to fetch.


method Files.convert_objects

convert_objects()

Converts GraphQL edges to File objects.


method Files.update_variables

update_variables()

Updates the GraphQL query variables for pagination.


class File

File saved to W&B.

Represents a single file stored in W&B. Includes access to file metadata. Files are associated with a specific run and can include text files, model weights, datasets, visualizations, and other artifacts. You can download the file, delete the file, and access file properties.

Specify one or more attributes in a dictionary to find a specific file logged to a specific run. You can search using the following keys:

  • id (str): The ID of the run that contains the file
  • name (str): Name of the file
  • url (str): path to file
  • direct_url (str): path to file in the bucket
  • sizeBytes (int): size of file in bytes
  • md5 (str): md5 of file
  • mimetype (str): mimetype of file
  • updated_at (str): timestamp of last update
  • path_uri (str): path to file in the bucket, currently only available for files stored in S3

Args:

  • client: The API client instance to use
  • attrs (dict): A dictionary of attributes that define the file
  • run: The run object that contains the file

Example:

from wandb.apis.public.files import File
from wandb.apis.public.api import Api

# Initialize the API client
api = Api()

# Example attributes dictionary
file_attrs = {
   "id": "file-id",
   "name": "example_file.txt",
   "url": "https://example.com/file",
   "direct_url": "https://example.com/direct_file",
   "sizeBytes": 1024,
   "mimetype": "text/plain",
   "updated_at": "2025-03-25T21:43:51Z",
   "md5": "d41d8cd98f00b204e9800998ecf8427e",
}

# Example run object
run = api.run("entity/project/run-id")

# Create a File object
file = File(api.client, file_attrs, run)

# Access some of the attributes
print("File ID:", file.id)
print("File Name:", file.name)
print("File URL:", file.url)
print("File MIME Type:", file.mimetype)
print("File Updated At:", file.updated_at)

# Access File properties
print("File Size:", file.size)
print("File Path URI:", file.path_uri)

# Download the file
file.download(root="download_directory", replace=True)

# Delete the file
file.delete()

method File.__init__

__init__(client, attrs, run=None)

property File.path_uri

Returns the URI path to the file in the storage bucket.


property File.size

Returns the size of the file in bytes.


method File.delete

delete()

Delete the file from the W&B server.


method File.download

download(
    root: str = '.',
    replace: bool = False,
    exist_ok: bool = False,
    api: Optional[wandb.apis.public.api.Api] = None
) -> TextIOWrapper

Downloads a file previously saved by a run from the wandb server.

Args:

  • root: Local directory to save the file. Defaults to “.”.
  • replace: If True, download will overwrite a local file if it exists. Defaults to False.
  • exist_ok: If True, will not raise ValueError if file already exists and will not re-download unless replace=True. Defaults to False.
  • api: If specified, the Api instance used to download the file.

Raises: ValueError if the file already exists, replace=False, and exist_ok=False.
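The interplay of the replace and exist_ok flags can be summarized as a small decision table. This is an illustrative sketch of the behavior described above; `should_download` is a hypothetical helper, not the library's code.

```python
import os

def should_download(local_path, replace=False, exist_ok=False):
    """Toy decision table for File.download's replace/exist_ok flags."""
    if not os.path.exists(local_path):
        return True   # nothing there yet: download
    if replace:
        return True   # overwrite the existing local file
    if exist_ok:
        return False  # keep the existing file, no error, no re-download
    raise ValueError(f"{local_path} already exists")
```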

5.2.5 - history

module wandb.apis.public

W&B Public API for Run History.

This module provides classes for efficiently scanning and sampling run history data.

Note:

This module is part of the W&B Public API and provides methods to access run history data. It handles pagination automatically and offers both complete and sampled access to metrics logged during training runs.


class HistoryScan

Iterator for scanning complete run history.

Args:

  • client: (wandb.apis.internal.Api) The client instance to use
  • run: (wandb.sdk.internal.Run) The run object to scan history for
  • min_step: (int) The minimum step to start scanning from
  • max_step: (int) The maximum step to scan up to
  • page_size: (int) Number of samples per page (default is 1000)

method HistoryScan.__init__

__init__(client, run, min_step, max_step, page_size=1000)

class SampledHistoryScan

Iterator for sampling run history data.

Args:

  • client: (wandb.apis.internal.Api) The client instance to use
  • run: (wandb.sdk.internal.Run) The run object to sample history from
  • keys: (list) List of keys to sample from the history
  • min_step: (int) The minimum step to start sampling from
  • max_step: (int) The maximum step to sample up to
  • page_size: (int) Number of samples per page (default is 1000)

method SampledHistoryScan.__init__

__init__(client, run, keys, min_step, max_step, page_size=1000)
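A small sketch of how min_step, max_step, and page_size partition a scan into page requests. `step_pages` is a hypothetical helper used only to illustrate the parameter semantics, not the scanners' implementation.

```python
def step_pages(min_step, max_step, page_size=1000):
    """Yield (start, end) step windows, one per page request."""
    start = min_step
    while start < max_step:
        end = min(start + page_size, max_step)
        yield (start, end)
        start = end
```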

5.2.6 - integrations

module wandb.apis.public

W&B Public API for integrations.

This module provides classes for interacting with W&B integrations.

class Integrations

method Integrations.__init__

__init__(client: '_Client', variables: 'dict[str, Any]', per_page: 'int' = 50)

property Integrations.cursor

The start cursor to use for the next page.


property Integrations.more

Whether there are more Integrations to fetch.


method Integrations.convert_objects

convert_objects() -> Iterable[Integration]

Parse the page data into a list of integrations.


class WebhookIntegrations

method WebhookIntegrations.__init__

__init__(client: '_Client', variables: 'dict[str, Any]', per_page: 'int' = 50)

property WebhookIntegrations.cursor

The start cursor to use for the next page.


property WebhookIntegrations.more

Whether there are more webhook integrations to fetch.


method WebhookIntegrations.convert_objects

convert_objects() -> Iterable[WebhookIntegration]

Parse the page data into a list of webhook integrations.


class SlackIntegrations

method SlackIntegrations.__init__

__init__(client: '_Client', variables: 'dict[str, Any]', per_page: 'int' = 50)

property SlackIntegrations.cursor

The start cursor to use for the next page.


property SlackIntegrations.more

Whether there are more Slack integrations to fetch.


method SlackIntegrations.convert_objects

convert_objects() -> Iterable[SlackIntegration]

Parse the page data into a list of Slack integrations.

5.2.7 - jobs

module wandb.apis.public

W&B Public API for managing Launch Jobs and Launch Queues.

This module provides classes for managing W&B jobs, queued runs, and run queues.

class Job

method Job.__init__

__init__(api: 'Api', name, path: Optional[str] = None) -> None

property Job.name

The name of the job.


method Job.call

call(
    config,
    project=None,
    entity=None,
    queue=None,
    resource='local-container',
    resource_args=None,
    template_variables=None,
    project_queue=None,
    priority=None
)

Call the job with the given configuration.

Args:

  • config (dict): The configuration to pass to the job. This should be a dictionary containing key-value pairs that match the input types defined in the job.
  • project (str, optional): The project to log the run to. Defaults to the job’s project.
  • entity (str, optional): The entity to log the run under. Defaults to the job’s entity.
  • queue (str, optional): The name of the queue to enqueue the job to. Defaults to None.
  • resource (str, optional): The resource type to use for execution. Defaults to “local-container”.
  • resource_args (dict, optional): Additional arguments for the resource type. Defaults to None.
  • template_variables (dict, optional): Template variables to use for the job. Defaults to None.
  • project_queue (str, optional): The project that manages the queue. Defaults to None.
  • priority (int, optional): The priority of the queued run. Defaults to None.

method Job.set_entrypoint

set_entrypoint(entrypoint: List[str])

Set the entrypoint for the job.


class QueuedRun

A single queued run associated with an entity and project.

Args:

  • entity: The entity associated with the queued run.
  • project (str): The project where runs executed by the queue are logged to.
  • queue_name (str): The name of the queue.
  • run_queue_item_id (int): The id of the run queue item.
  • project_queue (str): The project that manages the queue.
  • priority (str): The priority of the queued run.

Call run = queued_run.wait_until_running() or run = queued_run.wait_until_finished() to access the run.

method QueuedRun.__init__

__init__(
    client,
    entity,
    project,
    queue_name,
    run_queue_item_id,
    project_queue='model-registry',
    priority=None
)

property QueuedRun.entity

The entity associated with the queued run.


property QueuedRun.id

The id of the queued run.


property QueuedRun.project

The project associated with the queued run.


property QueuedRun.queue_name

The name of the queue.


property QueuedRun.state

The state of the queued run.


method QueuedRun.delete

delete(delete_artifacts=False)

Delete the given queued run from the wandb backend.


method QueuedRun.wait_until_finished

wait_until_finished()

Wait for the queued run to complete and return the finished run.


method QueuedRun.wait_until_running

wait_until_running()

Wait until the queued run is running and return the run.
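Both wait methods boil down to polling the backend until the run reaches the desired state. A generic sketch of that pattern follows; `wait_until` is a hypothetical helper for illustration, not the library's code.

```python
import time

def wait_until(predicate, poll_interval=0.01, timeout=5.0):
    """Poll predicate() until it returns truthy or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll_interval)
    raise TimeoutError("condition not met before timeout")
```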


class RunQueue

Class that represents a run queue in W&B.

Args:

  • client: W&B API client instance.
  • name: Name of the run queue
  • entity: The entity (user or team) that owns this queue
  • prioritization_mode: Queue priority mode. Can be “DISABLED” or “V0”. Defaults to None.
  • _access: Access level for the queue. Can be “project” or “user”. Defaults to None.
  • _default_resource_config_id: ID of default resource config
  • _default_resource_config: Default resource configuration

method RunQueue.__init__

__init__(
    client: 'RetryingClient',
    name: str,
    entity: str,
    prioritization_mode: Optional[Literal['DISABLED', 'V0']] = None,
    _access: Optional[Literal['project', 'user']] = None,
    _default_resource_config_id: Optional[int] = None,
    _default_resource_config: Optional[dict] = None
) -> None

property RunQueue.access

The access level of the queue.


property RunQueue.default_resource_config

The default configuration for resources.


property RunQueue.entity

The entity that owns the queue.


property RunQueue.external_links

External resource links for the queue.


property RunQueue.id

The id of the queue.


property RunQueue.items

Up to the first 100 queued runs. Modifying this list will not modify the queue or any enqueued items!


property RunQueue.name

The name of the queue.


property RunQueue.prioritization_mode

The prioritization mode of the queue.

Can be set to “DISABLED” or “V0”.


property RunQueue.template_variables

Variables for resource templates.


property RunQueue.type

The resource type for execution.


classmethod RunQueue.create

create(
    name: str,
    resource: 'RunQueueResourceType',
    entity: Optional[str] = None,
    prioritization_mode: Optional[ForwardRef('RunQueuePrioritizationMode')] = None,
    config: Optional[dict] = None,
    template_variables: Optional[dict] = None
) -> RunQueue

Create a RunQueue.

Args:

  • name: The name of the run queue to create.
  • resource: The resource type for execution.
  • entity: The entity (user or team) that will own the queue. Defaults to the default entity of the API client.
  • prioritization_mode: The prioritization mode for the queue. Can be “DISABLED” or “V0”. Defaults to None.
  • config: Optional dictionary for the default resource configuration. Defaults to None.
  • template_variables: Optional dictionary for template variables used in the resource configuration.

method RunQueue.delete

delete()

Delete the run queue from the wandb backend.

5.2.8 - projects

module wandb.apis.public

W&B Public API for Project objects.

This module provides classes for interacting with W&B projects and their associated data.

Example:

from wandb.apis.public import Api

# Initialize API
api = Api()

# Get all projects for an entity
projects = api.projects("entity")

# Access project data
for project in projects:
    print(f"Project: {project.name}")
    print(f"URL: {project.url}")

    # Get artifact types
    for artifact_type in project.artifacts_types():
        print(f"Artifact Type: {artifact_type.name}")

    # Get sweeps
    for sweep in project.sweeps():
        print(f"Sweep ID: {sweep.id}")
        print(f"State: {sweep.state}")

Note:

This module is part of the W&B Public API and provides methods to access and manage projects. For creating new projects, use wandb.init() with a new project name.

class Projects

An iterable collection of Project objects.

An iterable interface to access projects created and saved by the entity.

Args:

  • client (wandb.apis.internal.Api): The API client instance to use.
  • entity (str): The entity name (username or team) to fetch projects for.
  • per_page (int): Number of projects to fetch per request (default is 50).

Example:

from wandb.apis.public.api import Api

# Initialize the API client
api = Api()

# Find projects that belong to this entity
projects = api.projects(entity="entity")

# Iterate over projects
for project in projects:
   print(f"Project: {project.name}")
   print(f"- URL: {project.url}")
   print(f"- Created at: {project.created_at}")
   print(f"- Is benchmark: {project.is_benchmark}")

method Projects.__init__

__init__(client, entity, per_page=50)

property Projects.cursor

Returns the cursor position for pagination of project results.


property Projects.length

Returns the total number of projects.

Note: This property is not available for projects.


property Projects.more

Returns True if there are more projects to fetch. Returns False if there are no more projects to fetch.


method Projects.convert_objects

convert_objects()

Converts GraphQL edges to Project objects.


class Project

A project is a namespace for runs.

Args:

  • client: W&B API client instance.
  • name (str): The name of the project.
  • entity (str): The entity name that owns the project.

method Project.__init__

__init__(client, entity, project, attrs)

property Project.id

The unique identifier of the project.

property Project.path

Returns the path of the project. The path is a list containing the entity and project name.


property Project.url

Returns the URL of the project.


method Project.artifacts_types

artifacts_types(per_page=50)

Returns all artifact types associated with this project.


method Project.sweeps

sweeps()

Fetches all sweeps associated with the project.


5.2.9 - query_generator

module wandb.apis.public


method QueryGenerator.filter_to_mongo

filter_to_mongo(filter)

Returns dictionary with filter format converted to MongoDB filter.


classmethod QueryGenerator.format_order_key

format_order_key(key: str)

Format a key for sorting.


method QueryGenerator.key_to_server_path

key_to_server_path(key)

Convert a key dictionary to the corresponding server path string.


method QueryGenerator.keys_to_order

keys_to_order(keys)

Convert a list of key dictionaries to an order string.


method QueryGenerator.mongo_to_filter

mongo_to_filter(filter)

Returns dictionary with MongoDB filter converted to filter format.


method QueryGenerator.order_to_keys

order_to_keys(order)

Convert an order string to a list of key dictionaries.


method QueryGenerator.server_path_to_key

server_path_to_key(path)

Convert a server path string to the corresponding key dictionary.
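The filter format these helpers translate is MongoDB-style. The following toy evaluator shows how $and, $or, and $regex clauses combine; it is illustrative only, and `matches` is a hypothetical function, not the query generator itself.

```python
import re

def matches(doc, filt):
    """Check a flat document against a Mongo-style filter (toy version)."""
    for key, val in filt.items():
        if key == "$and":
            if not all(matches(doc, f) for f in val):
                return False
        elif key == "$or":
            if not any(matches(doc, f) for f in val):
                return False
        elif isinstance(val, dict) and "$regex" in val:
            if not re.search(val["$regex"], str(doc.get(key, ""))):
                return False
        elif doc.get(key) != val:
            return False
    return True
```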

5.2.10 - reports

module wandb.apis.public

W&B Public API for Report objects.

This module provides classes for interacting with W&B reports and managing report-related data.


class Reports

Reports is an iterable collection of BetaReport objects.

Args:

  • client (wandb.apis.internal.Api): The API client instance to use.
  • project (wandb.sdk.internal.Project): The project to fetch reports from.
  • name (str, optional): The name of the report to filter by. If None, fetches all reports.
  • entity (str, optional): The entity name for the project. Defaults to the project entity.
  • per_page (int): Number of reports to fetch per page (default is 50).

method Reports.__init__

__init__(client, project, name=None, entity=None, per_page=50)

property Reports.cursor

Returns the cursor position for pagination of file results.


property Reports.length

The number of reports in the project.


property Reports.more

Returns whether there are more reports to fetch.


method Reports.convert_objects

convert_objects()

Converts GraphQL edges to BetaReport objects.


method Reports.update_variables

update_variables()

Updates the GraphQL query variables for pagination.


class BetaReport

BetaReport is a class associated with reports created in W&B.

WARNING: this API will likely change in a future release

Attributes:

  • name (string): The name of the report.
  • description (string): Report description.
  • user (User): The user that created the report.
  • spec (dict): The spec of the report.
  • updated_at (string): Timestamp of last update.

method BetaReport.__init__

__init__(client, attrs, entity=None, project=None)

property BetaReport.sections

Get the panel sections (groups) from the report.


property BetaReport.updated_at

Timestamp of last update


property BetaReport.url

URL of the report.

Contains the entity, project, display name, and id.


method BetaReport.runs

runs(section, per_page=50, only_selected=True)

Get runs associated with a section of the report.


method BetaReport.to_html

to_html(height=1024, hidden=False)

Generate HTML containing an iframe displaying this report.


5.2.11 - runs

module wandb.apis.public

W&B Public API for Runs.

This module provides classes for interacting with W&B runs and their associated data.

Example:

from wandb.apis.public import Api

# Initialize API
api = Api()

# Get runs matching filters
runs = api.runs(
    path="entity/project", filters={"state": "finished", "config.batch_size": 32}
)

# Access run data
for run in runs:
    print(f"Run: {run.name}")
    print(f"Config: {run.config}")
    print(f"Metrics: {run.summary}")

    # Get history with pandas
    history_df = run.history(keys=["loss", "accuracy"], pandas=True)

    # Work with artifacts
    for artifact in run.logged_artifacts():
        print(f"Artifact: {artifact.name}")

Note:

This module is part of the W&B Public API and provides read/write access to run data. For logging new runs, use the wandb.init() function from the main wandb package.

class Runs

An iterable collection of runs associated with a project and optional filter.

This is generally used indirectly using the Api.runs namespace.

Args:

  • client: (wandb.apis.public.RetryingClient) The API client to use for requests.
  • entity: (str) The entity (username or team) that owns the project.
  • project: (str) The name of the project to fetch runs from.
  • filters: (Optional[Dict[str, Any]]) A dictionary of filters to apply to the runs query.
  • order: (Optional[str]) The order of the runs. Can be “asc” or “desc”. Defaults to “desc”.
  • per_page: (int) The number of runs to fetch per request (default is 50).
  • include_sweeps: (bool) Whether to include sweep information in the runs. Defaults to True.

Examples:

from wandb.apis.public.runs import Runs
from wandb.apis.public import Api

# Initialize the API client
api = Api()

# Get all runs from a project that satisfy the filters
filters = {"state": "finished", "config.optimizer": "adam"}

runs = Runs(
   client=api.client,
   entity="entity",
   project="project_name",
   filters=filters,
)

# Iterate over runs and print details
for run in runs:
   print(f"Run name: {run.name}")
   print(f"Run ID: {run.id}")
   print(f"Run URL: {run.url}")
   print(f"Run state: {run.state}")
   print(f"Run config: {run.config}")
   print(f"Run summary: {run.summary}")
   print(f"Run history (samples=5): {run.history(samples=5)}")
   print("----------")

# Get histories for all runs with specific metrics
histories_df = runs.histories(
   samples=100,  # Number of samples per run
   keys=["loss", "accuracy"],  # Metrics to fetch
   x_axis="_step",  # X-axis metric
   format="pandas",  # Return as pandas DataFrame
)
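The filters argument accepts MongoDB query operators such as $and, $or, $lte, and $regex. A minimal sketch of composing one (the key names, such as config.lr, are illustrative):

```python
# MongoDB-style filter: finished runs whose config.lr is at most 0.01,
# or whose display name starts with "baseline-".
filters = {
    "$or": [
        {"$and": [{"state": "finished"}, {"config.lr": {"$lte": 0.01}}]},
        {"display_name": {"$regex": "^baseline-"}},
    ]
}

# Passed to the public API as:
# runs = wandb.Api().runs("entity/project", filters=filters)
```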

method Runs.__init__

__init__(
    client: 'RetryingClient',
    entity: str,
    project: str,
    filters: Optional[Dict[str, Any]] = None,
    order: Optional[str] = None,
    per_page: int = 50,
    include_sweeps: bool = True
)

property Runs.cursor

Returns the cursor position for pagination of runs results.


property Runs.length

Returns the total number of runs.


property Runs.more

Returns whether there are more runs to fetch.


method Runs.convert_objects

convert_objects()

Converts GraphQL edges to Run objects.


method Runs.histories

histories(
    samples: int = 500,
    keys: Optional[List[str]] = None,
    x_axis: str = '_step',
    format: Literal['default', 'pandas', 'polars'] = 'default',
    stream: Literal['default', 'system'] = 'default'
)

Return sampled history metrics for all runs that fit the filters conditions.

Args:

  • samples: The number of samples to return per run
  • keys: Only return metrics for specific keys
  • x_axis: Use this metric as the xAxis defaults to _step
  • format: Format to return data in, options are “default”, “pandas”, “polars”
  • stream: “default” for metrics, “system” for machine metrics

Returns:

  • pandas.DataFrame: If format="pandas", returns a pandas.DataFrame of history metrics.
  • polars.DataFrame: If format="polars", returns a polars.DataFrame of history metrics.
  • list of dicts: If format="default", returns a list of dicts containing history metrics with a run_id key.
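With format="default", the returned rows are flat dicts that each carry a run_id key, so grouping them back per run is straightforward. A sketch with made-up rows:

```python
from collections import defaultdict

# Made-up rows in the shape histories() returns with format="default".
rows = [
    {"run_id": "a1", "_step": 0, "loss": 1.0},
    {"run_id": "a1", "_step": 1, "loss": 0.8},
    {"run_id": "b2", "_step": 0, "loss": 1.2},
]

per_run = defaultdict(list)
for row in rows:
    per_run[row["run_id"]].append(row["loss"])
```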

class Run

A single run associated with an entity and project.

Args:

  • client: The W&B API client.
  • entity: The entity associated with the run.
  • project: The project associated with the run.
  • run_id: The unique identifier for the run.
  • attrs: The attributes of the run.
  • include_sweeps: Whether to include sweeps in the run.

Attributes:

  • tags ([str]): a list of tags associated with the run
  • url (str): the url of this run
  • id (str): unique identifier for the run (defaults to eight characters)
  • name (str): the name of the run
  • state (str): one of: running, finished, crashed, killed, preempting, preempted
  • config (dict): a dict of hyperparameters associated with the run
  • created_at (str): ISO timestamp when the run was started
  • system_metrics (dict): the latest system metrics recorded for the run
  • summary (dict): A mutable dict-like property that holds the current summary. Calling update will persist any changes.
  • project (str): the project associated with the run
  • entity (str): the name of the entity associated with the run
  • project_internal_id (int): the internal id of the project
  • user (str): the name of the user who created the run
  • path (str): Unique identifier [entity]/[project]/[run_id]
  • notes (str): Notes about the run
  • read_only (boolean): Whether the run is editable
  • history_keys (str): Keys of the history metrics that have been logged with wandb.log({key: value})
  • metadata (str): Metadata about the run from wandb-metadata.json

method Run.__init__

__init__(
    client: 'RetryingClient',
    entity: str,
    project: str,
    run_id: str,
    attrs: Optional[Mapping] = None,
    include_sweeps: bool = True
)

Initialize a Run object.

Run is always initialized by calling api.runs() where api is an instance of wandb.Api.


method Run.delete

delete(delete_artifacts=False)

Delete the given run from the wandb backend.

Args:

  • delete_artifacts (bool, optional): Whether to delete the artifacts associated with the run.

method Run.file

file(name)

Return the File with the given name from this run.

Args:

  • name (str): name of requested file.

Returns: A File matching the name argument.


method Run.files

files(names=None, per_page=50)

Return File objects for each of the named files, or all files if no names are given.

Args:

  • names (list): names of the requested files, if empty returns all files
  • per_page (int): number of results per page.

Returns: A Files object, which is an iterator over File objects.


method Run.history

history(samples=500, keys=None, x_axis='_step', pandas=True, stream='default')

Return sampled history metrics for a run.

This is simpler and faster if you are ok with the history records being sampled.

Args:

  • samples : (int, optional) The number of samples to return
  • pandas : (bool, optional) Return a pandas dataframe
  • keys : (list, optional) Only return metrics for specific keys
  • x_axis : (str, optional) Use this metric as the xAxis defaults to _step
  • stream : (str, optional) “default” for metrics, “system” for machine metrics

Returns:

  • pandas.DataFrame: If pandas=True returns a pandas.DataFrame of history metrics.
  • list of dicts: If pandas=False returns a list of dicts of history metrics.

method Run.load

load(force=False)

Fetch and update run data from GraphQL database.

Ensures run data is up to date.

Args:

  • force (bool): Whether to force a refresh of the run data.

method Run.log_artifact

log_artifact(
    artifact: 'wandb.Artifact',
    aliases: Optional[Collection[str]] = None,
    tags: Optional[Collection[str]] = None
)

Declare an artifact as output of a run.

Args:

  • artifact (Artifact): An artifact returned from wandb.Api().artifact(name).
  • aliases (list, optional): Aliases to apply to this artifact.
  • tags: (list, optional) Tags to apply to this artifact, if any.

Returns: An Artifact object.


method Run.logged_artifacts

logged_artifacts(per_page: int = 100) → RunArtifacts

Fetches all artifacts logged by this run.

Retrieves all output artifacts that were logged during the run. Returns a paginated result that can be iterated over or collected into a single list.

Args:

  • per_page: Number of artifacts to fetch per API request.

Returns: An iterable collection of all Artifact objects logged as outputs during this run.

Example:

import wandb
import tempfile

with tempfile.NamedTemporaryFile(mode="w", delete=False, suffix=".txt") as tmp:
   tmp.write("This is a test artifact")
   tmp_path = tmp.name
run = wandb.init(project="artifact-example")
artifact = wandb.Artifact("test_artifact", type="dataset")
artifact.add_file(tmp_path)
run.log_artifact(artifact)
run.finish()

api = wandb.Api()

finished_run = api.run(f"{run.entity}/{run.project}/{run.id}")

for logged_artifact in finished_run.logged_artifacts():
   print(logged_artifact.name)

method Run.save

save()

Persist changes to the run object to the W&B backend.


method Run.scan_history

scan_history(keys=None, page_size=1000, min_step=None, max_step=None)

Returns an iterable collection of all history records for a run.

Args:

  • keys ([str], optional): only fetch these keys, and only fetch rows that have all of keys defined.
  • page_size (int, optional): size of pages to fetch from the api.
  • min_step (int, optional): the minimum step to fetch history rows from.
  • max_step (int, optional): the maximum step to fetch history rows up to.

Returns: An iterable collection over history records (dict).

Example: Export all the loss values for an example run

run = api.run("entity/project-name/run-id")
history = run.scan_history(keys=["Loss"])
losses = [row["Loss"] for row in history]
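When keys is given, scan_history only yields rows in which every requested key is defined. The filtering behaves like this sketch (rows are made up):

```python
rows = [
    {"_step": 0, "Loss": 1.0},
    {"_step": 1},                # "Loss" missing: skipped when keys=["Loss"]
    {"_step": 2, "Loss": 0.5},
]
keys = ["Loss"]

# Keep only rows where all requested keys are defined.
kept = [r for r in rows if all(k in r for k in keys)]
```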

method Run.to_html

to_html(height=420, hidden=False)

Generate HTML containing an iframe displaying this run.


method Run.update

update()

Persist changes to the run object to the wandb backend.


method Run.upload_file

upload_file(path, root='.')

Upload a local file to W&B, associating it with this run.

Args:

  • path (str): Path to the file to upload. Can be absolute or relative.
  • root (str): The root path to save the file relative to. For example, if you want to have the file saved in the run as “my_dir/file.txt” and you’re currently in “my_dir” you would set root to “../”. Defaults to current directory (".").

Returns: A File object representing the uploaded file.
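The saved name is effectively the upload path taken relative to root. With explicit, hypothetical paths the relationship looks like this sketch:

```python
import posixpath

# Hypothetical paths: uploading /home/me/my_dir/file.txt with root="/home/me"
# records the file under "my_dir/file.txt" in the run.
path = "/home/me/my_dir/file.txt"
root = "/home/me"
saved_name = posixpath.relpath(path, root)
```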


method Run.use_artifact

use_artifact(artifact, use_as=None)

Declare an artifact as an input to a run.

Args:

  • artifact (Artifact): An artifact returned from wandb.Api().artifact(name)
  • use_as (string, optional): A string identifying how the artifact is used in the script. Used to easily differentiate artifacts used in a run, when using the beta wandb launch feature’s artifact swapping functionality.

Returns: An Artifact object.


method Run.used_artifacts

used_artifacts(per_page: int = 100) → RunArtifacts

Fetches artifacts explicitly used by this run.

Retrieves only the input artifacts that were explicitly declared as used during the run, typically via run.use_artifact(). Returns a paginated result that can be iterated over or collected into a single list.

Args:

  • per_page: Number of artifacts to fetch per API request.

Returns: An iterable collection of Artifact objects explicitly used as inputs in this run.

Example:

import wandb

run = wandb.init(project="artifact-example")
run.use_artifact("test_artifact:latest")
run.finish()

api = wandb.Api()
finished_run = api.run(f"{run.entity}/{run.project}/{run.id}")
for used_artifact in finished_run.used_artifacts():
   print(used_artifact.name)
test_artifact

method Run.wait_until_finished

wait_until_finished()

Check the state of the run until it is finished.

5.2.12 - sweeps

module wandb.apis.public

W&B Public API for Sweeps.

This module provides classes for interacting with W&B hyperparameter optimization sweeps.

Example:

from wandb.apis.public import Api

# Initialize API
api = Api()

# Get a specific sweep
sweep = api.sweep("entity/project/sweep_id")

# Access sweep properties
print(f"Sweep: {sweep.name}")
print(f"State: {sweep.state}")
print(f"Best Loss: {sweep.best_loss}")

# Get best performing run
best_run = sweep.best_run()
print(f"Best Run: {best_run.name}")
print(f"Metrics: {best_run.summary}")

Note:

This module is part of the W&B Public API and provides read-only access to sweep data. For creating and controlling sweeps, use the wandb.sweep() and wandb.agent() functions from the main wandb package.

class Sweep

The set of runs associated with the sweep.

Attributes:

  • runs (Runs): List of runs
  • id (str): Sweep ID
  • project (str): The name of the project the sweep belongs to
  • config (dict): Dictionary containing the sweep configuration
  • state (str): The state of the sweep. Can be “Finished”, “Failed”, “Crashed”, or “Running”.
  • expected_run_count (int): The number of expected runs for the sweep

method Sweep.__init__

__init__(client, entity, project, sweep_id, attrs=None)

property Sweep.config

The sweep configuration used for the sweep.


property Sweep.entity

The entity associated with the sweep.


property Sweep.expected_run_count

Return the number of expected runs in the sweep or None for infinite runs.


property Sweep.name

The name of the sweep.

If the sweep has a name, it will be returned. Otherwise, the sweep ID will be returned.


property Sweep.order

Return the order key for the sweep.


property Sweep.path

Returns the path of the project.

The path is a list containing the entity, project name, and sweep ID.


property Sweep.url

The URL of the sweep.

The sweep URL is generated from the entity, project, the term “sweeps”, and the sweep ID. For SaaS users, it takes the form https://wandb.ai/entity/project/sweeps/sweep_ID.
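For SaaS, the URL composition amounts to the following sketch (the entity, project, and sweep ID are made up; self-managed deployments use their own host):

```python
entity, project, sweep_id = "my-team", "my-project", "abc123"  # hypothetical
url = f"https://wandb.ai/{entity}/{project}/sweeps/{sweep_id}"
```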


property Sweep.username

Deprecated. Use Sweep.entity instead.


method Sweep.best_run

best_run(order=None)

Return the best run sorted by the metric defined in config or the order passed in.
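Conceptually, for a sweep whose config declares metric: {name: "loss", goal: "minimize"}, best_run() reduces to picking the run with the smallest summary value. A pure-Python sketch with made-up summaries:

```python
# Made-up run summaries standing in for sweep.runs.
runs = [
    {"name": "run-a", "summary": {"loss": 0.42}},
    {"name": "run-b", "summary": {"loss": 0.17}},
    {"name": "run-c", "summary": {"loss": 0.33}},
]

# With goal "minimize", the best run has the lowest metric value.
best = min(runs, key=lambda r: r["summary"]["loss"])
```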


classmethod Sweep.get

get(
    client,
    entity=None,
    project=None,
    sid=None,
    order=None,
    query=None,
    **kwargs
)

Execute a query against the cloud backend.


method Sweep.load

load(force: bool = False)

Fetch and update sweep data from the GraphQL database.


method Sweep.to_html

to_html(height=420, hidden=False)

Generate HTML containing an iframe displaying this sweep.

5.2.13 - teams

module wandb.apis.public

W&B Public API for managing teams and team members.

This module provides classes for managing W&B teams and their members.

Note:

This module is part of the W&B Public API and provides methods to manage teams and their members. Team management operations require appropriate permissions.


class Member

A member of a team.

Args:

  • client (wandb.apis.internal.Api): The client instance to use
  • team (str): The name of the team this member belongs to
  • attrs (dict): The member attributes

method Member.__init__

__init__(client, team, attrs)

method Member.delete

delete()

Remove a member from a team.

Returns: Boolean indicating success


class Team

A class that represents a W&B team.

This class provides methods to manage W&B teams, including creating teams, inviting members, and managing service accounts. It inherits from Attrs to handle team attributes.

Args:

  • client (wandb.apis.public.Api): The api instance to use
  • name (str): The name of the team
  • attrs (dict): Optional dictionary of team attributes

Note:

Team management requires appropriate permissions.

method Team.__init__

__init__(client, name, attrs=None)

classmethod Team.create

create(api, team, admin_username=None)

Create a new team.

Args:

  • api: (Api) The api instance to use
  • team: (str) The name of the team
  • admin_username: (str) optional username of the admin user of the team, defaults to the current user.

Returns: A Team object


method Team.create_service_account

create_service_account(description)

Create a service account for the team.

Args:

  • description: (str) A description for this service account

Returns: The service account Member object, or None on failure


method Team.invite

invite(username_or_email, admin=False)

Invite a user to a team.

Args:

  • username_or_email: (str) The username or email address of the user you want to invite.
  • admin: (bool) Whether to make this user a team admin. Defaults to False.

Returns: True on success, False if user was already invited or didn’t exist.


method Team.load

load(force=False)

Return members that belong to a team.

5.2.14 - users

module wandb.apis.public

W&B Public API for managing users and API keys.

This module provides classes for managing W&B users and their API keys.

Note:

This module is part of the W&B Public API and provides methods to manage users and their authentication. Some operations require admin privileges.


class User

A class representing a W&B user with authentication and management capabilities.

This class provides methods to manage W&B users, including creating users, managing API keys, and accessing team memberships. It inherits from Attrs to handle user attributes.

Args:

  • client: (wandb.apis.internal.Api) The client instance to use
  • attrs: (dict) The user attributes

Note:

Some operations require admin privileges

method User.__init__

__init__(client, attrs)

property User.api_keys

List of API key names associated with the user.

Returns:

  • list[str]: Names of API keys associated with the user. Empty list if user has no API keys or if API key data hasn’t been loaded.

property User.teams

List of team names that the user is a member of.

Returns:

  • list (list): Names of teams the user belongs to. Empty list if user has no team memberships or if teams data hasn’t been loaded.

property User.user_api

An instance of the api using credentials from the user.


classmethod User.create

create(api, email, admin=False)

Create a new user.

Args:

  • api (Api): The api instance to use
  • email (str): The email address of the user to create
  • admin (bool): Whether this user should be a global instance admin

Returns: A User object


method User.delete_api_key

delete_api_key(api_key)

Delete a user’s api key.

Args:

  • api_key (str): The name of the API key to delete. This should be one of the names returned by the api_keys property.

Returns: Boolean indicating success

Raises: ValueError if the api_key couldn’t be found


method User.generate_api_key

generate_api_key(description=None)

Generate a new api key.

Args:

  • description (str, optional): A description for the new API key. This can be used to identify the purpose of the API key.

Returns: The new api key, or None on failure

5.3 - Automations

Automate your W&B workflows.

5.3.1 - ActionType

class ActionType

The type of action triggered by an automation.

5.3.2 - ArtifactEvent

class ArtifactEvent

5.3.3 - Automation

class Automation

A local instance of a saved W&B automation.


property Automation.model_extra

Get extra fields set during validation.

Returns: A dictionary of extra fields, or None if config.extra is not set to "allow".


property Automation.model_fields_set

Returns the set of fields that have been explicitly set on this model instance.

Returns: A set of strings representing the fields that have been set, i.e. that were not filled from defaults.

5.3.4 - DoNothing

class DoNothing

Defines an automation action that intentionally does nothing.


property DoNothing.model_extra

Get extra fields set during validation.

Returns: A dictionary of extra fields, or None if config.extra is not set to "allow".


property DoNothing.model_fields_set

Returns the set of fields that have been explicitly set on this model instance.

Returns: A set of strings representing the fields that have been set, i.e. that were not filled from defaults.

5.3.5 - EventType

class EventType

The type of event that triggers an automation.

5.3.6 - MetricChangeFilter

class MetricChangeFilter

Defines a filter that compares a change in a run metric against a user-defined threshold.

The change is calculated over “tumbling” windows, i.e. the difference between the current window and the non-overlapping prior window.
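Concretely, the tumbling-window comparison behaves like the sketch below. The window size and metric values are made up, and the per-window aggregate (shown as a mean) is an assumption for illustration:

```python
values = [0.9, 0.8, 0.7, 0.5, 0.4, 0.3]  # made-up metric history
window = 3

# Non-overlapping ("tumbling") windows: the prior window ends where
# the current window begins.
prior = sum(values[-2 * window:-window]) / window   # earlier window
current = sum(values[-window:]) / window            # most recent window
change = current - prior                            # compared to the threshold
```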


property MetricChangeFilter.model_extra

Get extra fields set during validation.

Returns: A dictionary of extra fields, or None if config.extra is not set to "allow".


property MetricChangeFilter.model_fields_set

Returns the set of fields that have been explicitly set on this model instance.

Returns: A set of strings representing the fields that have been set, i.e. that were not filled from defaults.

5.3.7 - MetricThresholdFilter

class MetricThresholdFilter

Defines a filter that compares a run metric against a user-defined threshold value.


property MetricThresholdFilter.model_extra

Get extra fields set during validation.

Returns: A dictionary of extra fields, or None if config.extra is not set to "allow".


property MetricThresholdFilter.model_fields_set

Returns the set of fields that have been explicitly set on this model instance.

Returns: A set of strings representing the fields that have been set, i.e. that were not filled from defaults.

5.3.8 - NewAutomation

class NewAutomation

A new automation to be created.


property NewAutomation.model_extra

Get extra fields set during validation.

Returns: A dictionary of extra fields, or None if config.extra is not set to "allow".


property NewAutomation.model_fields_set

Returns the set of fields that have been explicitly set on this model instance.

Returns: A set of strings representing the fields that have been set, i.e. that were not filled from defaults.


property NewAutomation.scope

The scope in which the triggering event must occur.

5.3.9 - OnAddArtifactAlias

class OnAddArtifactAlias

A new alias is assigned to an artifact.


property OnAddArtifactAlias.model_extra

Get extra fields set during validation.

Returns: A dictionary of extra fields, or None if config.extra is not set to "allow".


property OnAddArtifactAlias.model_fields_set

Returns the set of fields that have been explicitly set on this model instance.

Returns: A set of strings representing the fields that have been set, i.e. that were not filled from defaults.


method OnAddArtifactAlias.then

then(action: 'InputAction') → NewAutomation

Define a new Automation in which this event triggers the given action.

5.3.10 - OnCreateArtifact

class OnCreateArtifact

A new artifact is created.


property OnCreateArtifact.model_extra

Get extra fields set during validation.

Returns: A dictionary of extra fields, or None if config.extra is not set to "allow".


property OnCreateArtifact.model_fields_set

Returns the set of fields that have been explicitly set on this model instance.

Returns: A set of strings representing the fields that have been set, i.e. that were not filled from defaults.


method OnCreateArtifact.then

then(action: 'InputAction') → NewAutomation

Define a new Automation in which this event triggers the given action.

5.3.11 - OnLinkArtifact

class OnLinkArtifact

A new artifact is linked to a collection.


property OnLinkArtifact.model_extra

Get extra fields set during validation.

Returns: A dictionary of extra fields, or None if config.extra is not set to "allow".


property OnLinkArtifact.model_fields_set

Returns the set of fields that have been explicitly set on this model instance.

Returns: A set of strings representing the fields that have been set, i.e. that were not filled from defaults.


method OnLinkArtifact.then

then(action: 'InputAction') → NewAutomation

Define a new Automation in which this event triggers the given action.

5.3.12 - OnRunMetric

class OnRunMetric

A run metric satisfies a user-defined condition.


property OnRunMetric.model_extra

Get extra fields set during validation.

Returns: A dictionary of extra fields, or None if config.extra is not set to "allow".


property OnRunMetric.model_fields_set

Returns the set of fields that have been explicitly set on this model instance.

Returns: A set of strings representing the fields that have been set, i.e. that were not filled from defaults.


method OnRunMetric.then

then(action: 'InputAction') → NewAutomation

Define a new Automation in which this event triggers the given action.

5.3.13 - ProjectScope

class ProjectScope

An automation scope defined by a specific Project.


property ProjectScope.model_extra

Get extra fields set during validation.

Returns: A dictionary of extra fields, or None if config.extra is not set to "allow".


property ProjectScope.model_fields_set

Returns the set of fields that have been explicitly set on this model instance.

Returns: A set of strings representing the fields that have been set, i.e. that were not filled from defaults.

5.3.14 - RunEvent

class RunEvent


method RunEvent.metric

metric(name: 'str') → MetricVal

Define a metric filter condition.

5.3.15 - ScopeType

class ScopeType

The kind of scope that triggers an automation.

5.3.16 - SendNotification

class SendNotification

Defines an automation action that sends a (Slack) notification.


property SendNotification.model_extra

Get extra fields set during validation.

Returns: A dictionary of extra fields, or None if config.extra is not set to "allow".


property SendNotification.model_fields_set

Returns the set of fields that have been explicitly set on this model instance.

Returns: A set of strings representing the fields that have been set, i.e. that were not filled from defaults.


classmethod SendNotification.from_integration

from_integration(
    integration: 'SlackIntegration',
    title: 'str' = '',
    text: 'str' = '',
    level: 'AlertSeverity' = <AlertSeverity.INFO: 'INFO'>
) → Self

Define a notification action that sends to the given (Slack) integration.

5.3.17 - SendWebhook

class SendWebhook

Defines an automation action that sends a webhook request.


property SendWebhook.model_extra

Get extra fields set during validation.

Returns: A dictionary of extra fields, or None if config.extra is not set to "allow".


property SendWebhook.model_fields_set

Returns the set of fields that have been explicitly set on this model instance.

Returns: A set of strings representing the fields that have been set, i.e. that were not filled from defaults.


classmethod SendWebhook.from_integration

from_integration(
    integration: 'WebhookIntegration',
    payload: 'Optional[SerializedToJson[dict[str, Any]]]' = None
) → Self

Define a webhook action that sends to the given (webhook) integration.
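The payload is a JSON-serializable dict. W&B webhook payloads can embed template variables such as ${event_type} and ${artifact_version_string} that are filled in when the webhook fires; the field names below are illustrative:

```python
import json

# Illustrative payload; the ${...} placeholders are W&B template
# variables substituted at send time.
payload = {
    "event": "${event_type}",
    "artifact": "${artifact_version_string}",
}
body = json.dumps(payload)
```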

5.3.18 - SlackIntegration

class SlackIntegration


property SlackIntegration.model_extra

Get extra fields set during validation.

Returns: A dictionary of extra fields, or None if config.extra is not set to "allow".


property SlackIntegration.model_fields_set

Returns the set of fields that have been explicitly set on this model instance.

Returns: A set of strings representing the fields that have been set, i.e. that were not filled from defaults.

5.3.19 - WebhookIntegration

class WebhookIntegration


property WebhookIntegration.model_extra

Get extra fields set during validation.

Returns: A dictionary of extra fields, or None if config.extra is not set to "allow".


property WebhookIntegration.model_fields_set

Returns the set of fields that have been explicitly set on this model instance.

Returns: A set of strings representing the fields that have been set, i.e. that were not filled from defaults.

5.4 - SDK v(0.19.11)

Train and fine-tune models, manage models from experimentation to production. For guides and examples, see https://docs.wandb.ai.

5.4.1 - Actions

Use during training to log experiments, track metrics, and save model artifacts.

5.4.1.1 - Classes

5.4.1.1.1 - Artifact

class Artifact

Flexible and lightweight building block for dataset and model versioning.

Construct an empty W&B Artifact. Populate an artifact's contents with methods that begin with add. Once the artifact has all the desired files, you can call wandb.log_artifact() to log it.

Args:

  • name (str): A human-readable name for the artifact. Use the name to identify a specific artifact in the W&B App UI or programmatically. You can interactively reference an artifact with the use_artifact Public API. A name can contain letters, numbers, underscores, hyphens, and dots. The name must be unique across a project.
  • type (str): The artifact’s type. Use the type of an artifact to both organize and differentiate artifacts. You can use any string that contains letters, numbers, underscores, hyphens, and dots. Common types include dataset or model. Include model within your type string if you want to link the artifact to the W&B Model Registry. Note that some types are reserved for internal use and cannot be set by users. Such types include job and types that start with wandb-.
  • description (str | None) = None: A description of the artifact. For Model or Dataset Artifacts, add documentation for your standardized team model or dataset card. View an artifact’s description programmatically with the Artifact.description attribute or interactively in the W&B App UI. W&B renders the description as markdown in the W&B App.
  • metadata (dict[str, Any] | None) = None: Additional information about an artifact. Specify metadata as a dictionary of key-value pairs. You can specify no more than 100 total keys.
  • incremental: Use Artifact.new_draft() method instead to modify an existing artifact.
  • use_as: Deprecated.
  • is_link: Boolean indicating whether the artifact is a linked artifact (True) or a source artifact (False).

Returns: An Artifact object.

method Artifact.__init__

__init__(
    name: 'str',
    type: 'str',
    description: 'str | None' = None,
    metadata: 'dict[str, Any] | None' = None,
    incremental: 'bool' = False,
    use_as: 'str | None' = None
) → None

property Artifact.aliases

List of one or more semantically-friendly references or identifying “nicknames” assigned to an artifact version.

Aliases are mutable references that you can programmatically reference. Change an artifact’s alias with the W&B App UI or programmatically. See Create new artifact versions for more information.


property Artifact.collection

The collection this artifact was retrieved from.

A collection is an ordered group of artifact versions. If this artifact was retrieved from a portfolio / linked collection, that collection will be returned rather than the collection that an artifact version originated from. The collection that an artifact originates from is known as the source sequence.


property Artifact.commit_hash

The hash returned when this artifact was committed.


property Artifact.created_at

Timestamp when the artifact was created.


property Artifact.description

A description of the artifact.


property Artifact.digest

The logical digest of the artifact.

The digest is the checksum of the artifact’s contents. If an artifact has the same digest as the current latest version, then log_artifact is a no-op.


property Artifact.distributed_id


property Artifact.entity

The name of the entity that the artifact collection belongs to.

If the artifact is a link, the entity will be the entity of the linked artifact.


property Artifact.file_count

The number of files (including references).


property Artifact.history_step

The nearest step at which history metrics were logged for the source run of the artifact.

Examples:

    run = artifact.logged_by()
    if run and (artifact.history_step is not None):
        history = run.sample_history(
            min_step=artifact.history_step,
            max_step=artifact.history_step + 1,
            keys=["my_metric"],
        )

---

### <kbd>property</kbd> Artifact.id

The artifact's ID. 

---

### <kbd>property</kbd> Artifact.incremental





---

### <kbd>property</kbd> Artifact.is_link

Boolean flag indicating if the artifact is a link artifact. 

True: The artifact is a link artifact to a source artifact. False: The artifact is a source artifact. 

---

### <kbd>property</kbd> Artifact.linked_artifacts

Returns a list of all the linked artifacts of a source artifact. 

If the artifact is a link artifact (`artifact.is_link == True`), it will return an empty list. Limited to 500 results. 

---

### <kbd>property</kbd> Artifact.manifest

The artifact's manifest. 

The manifest lists all of its contents, and can't be changed once the artifact has been logged. 

---

### <kbd>property</kbd> Artifact.metadata

User-defined artifact metadata. 

Structured data associated with the artifact. 

---

### <kbd>property</kbd> Artifact.name

The artifact name and version of the artifact. 

A string with the format `{collection}:{alias}`. If fetched before an artifact is logged/saved, the name won't contain the alias. If the artifact is a link, the name will be the name of the linked artifact. 

---

### <kbd>property</kbd> Artifact.project

The name of the project that the artifact collection belongs to. 

If the artifact is a link, the project will be the project of the linked artifact. 

---

### <kbd>property</kbd> Artifact.qualified_name

The entity/project/name of the artifact. 

If the artifact is a link, the qualified name will be the qualified name of the linked artifact path. 

---

### <kbd>property</kbd> Artifact.size

The total size of the artifact in bytes. 

Includes any references tracked by this artifact. 

---

### <kbd>property</kbd> Artifact.source_artifact

Returns the source artifact. The source artifact is the original logged artifact. 

If the artifact itself is a source artifact (`artifact.is_link == False`), it will return itself. 

---

### <kbd>property</kbd> Artifact.source_collection

The artifact's source collection. 

The source collection is the collection that the artifact was logged from. 

---

### <kbd>property</kbd> Artifact.source_entity

The name of the entity of the source artifact. 

---

### <kbd>property</kbd> Artifact.source_name

The artifact name and version of the source artifact. 

A string with the format `{source_collection}:{alias}`. Before the artifact is saved, contains only the name since the version is not yet known. 

---

### <kbd>property</kbd> Artifact.source_project

The name of the project of the source artifact. 

---

### <kbd>property</kbd> Artifact.source_qualified_name

The source_entity/source_project/source_name of the source artifact. 

---

### <kbd>property</kbd> Artifact.source_version

The source artifact's version. 

A string with the format `v{number}`. 

---

### <kbd>property</kbd> Artifact.state

The status of the artifact. One of: "PENDING", "COMMITTED", or "DELETED". 

---

### <kbd>property</kbd> Artifact.tags

List of one or more tags assigned to this artifact version. 

---

### <kbd>property</kbd> Artifact.ttl

The time-to-live (TTL) policy of an artifact. 

Artifacts are deleted shortly after a TTL policy's duration passes. If set to `None`, the artifact deactivates TTL policies and will not be scheduled for deletion, even if there is a team default TTL. An artifact inherits a TTL policy from the team default if the team administrator defines a default TTL and there is no custom policy set on an artifact. 



**Raises:**

- `ArtifactNotLoggedError`:  Unable to fetch inherited TTL if the artifact has not been logged or saved. 

---

### <kbd>property</kbd> Artifact.type

The artifact's type. Common types include `dataset` or `model`. 

---

### <kbd>property</kbd> Artifact.updated_at

The time when the artifact was last updated. 

---

### <kbd>property</kbd> Artifact.url

Constructs the URL of the artifact. 



**Returns:**

- `str`:  The URL of the artifact. 

---

### <kbd>property</kbd> Artifact.use_as

Deprecated. 

---

### <kbd>property</kbd> Artifact.version

The artifact's version. 

A string with the format `v{number}`. If the artifact is a link artifact, the version will be from the linked collection. 



---

### <kbd>method</kbd> `Artifact.add`

```python
add(
   obj: 'WBValue',
   name: 'StrPath',
   overwrite: 'bool' = False
)  ArtifactManifestEntry
```

Add wandb.WBValue obj to the artifact.

Args:

  • obj: The object to add. Currently supports one of Bokeh, JoinedTable, PartitionedTable, Table, Classes, ImageMask, BoundingBoxes2D, Audio, Image, Video, Html, Object3D
  • name: The path within the artifact to add the object.
  • overwrite: If True, overwrite existing objects with the same file path if applicable.

Returns: The added manifest entry

Raises:

  • ArtifactFinalizedError: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.

method Artifact.add_dir

add_dir(
    local_path: 'str',
    name: 'str | None' = None,
    skip_cache: 'bool | None' = False,
    policy: "Literal['mutable', 'immutable'] | None" = 'mutable',
    merge: 'bool' = False
)  None

Add a local directory to the artifact.

Args:

  • local_path: The path of the local directory.
  • name: The subdirectory name within an artifact. The name you specify appears in the W&B App UI nested by artifact’s type. Defaults to the root of the artifact.
  • skip_cache: If set to True, W&B will not copy/move files to the cache while uploading
  • policy: By default, “mutable”.
    • mutable: Create a temporary copy of the file to prevent corruption during upload.
    • immutable: Disable protection, rely on the user not to delete or change the file.
  • merge: If False (default), throws ValueError if a file was already added in a previous add_dir call and its content has changed. If True, overwrites existing files with changed content. Always adds new files and never removes files. To replace an entire directory, pass a name when adding the directory using add_dir(local_path, name=my_prefix) and call remove(my_prefix) to remove the directory, then add it again.

Raises:

  • ArtifactFinalizedError: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
  • ValueError: Policy must be “mutable” or “immutable”

method Artifact.add_file

add_file(
    local_path: 'str',
    name: 'str | None' = None,
    is_tmp: 'bool | None' = False,
    skip_cache: 'bool | None' = False,
    policy: "Literal['mutable', 'immutable'] | None" = 'mutable',
    overwrite: 'bool' = False
)  ArtifactManifestEntry

Add a local file to the artifact.

Args:

  • local_path: The path to the file being added.
  • name: The path within the artifact to use for the file being added. Defaults to the basename of the file.
  • is_tmp: If true, then the file is renamed deterministically to avoid collisions.
  • skip_cache: If True, do not copy files to the cache after uploading.
  • policy: By default, set to “mutable”. If set to “mutable”, create a temporary copy of the file to prevent corruption during upload. If set to “immutable”, disable protection and rely on the user not to delete or change the file.
  • overwrite: If True, overwrite the file if it already exists.

Returns: The added manifest entry.

Raises:

  • ArtifactFinalizedError: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
  • ValueError: Policy must be “mutable” or “immutable”

method Artifact.add_reference

add_reference(
    uri: 'ArtifactManifestEntry | str',
    name: 'StrPath | None' = None,
    checksum: 'bool' = True,
    max_objects: 'int | None' = None
)  Sequence[ArtifactManifestEntry]

Add a reference denoted by a URI to the artifact.

Unlike files or directories that you add to an artifact, references are not uploaded to W&B. For more information, see Track external files.

By default, the following schemes are supported:

  • http(s): The size and digest of the file will be inferred by the Content-Length and the ETag response headers returned by the server.
  • s3: The checksum and size are pulled from the object metadata. If bucket versioning is enabled, then the version ID is also tracked.
  • gs: The checksum and size are pulled from the object metadata. If bucket versioning is enabled, then the version ID is also tracked.
  • https, domain matching *.blob.core.windows.net (Azure): The checksum and size are pulled from the blob metadata. If storage account versioning is enabled, then the version ID is also tracked.
  • file: The checksum and size are pulled from the file system. This scheme is useful if you have an NFS share or other externally mounted volume containing files you wish to track but not necessarily upload.

For any other scheme, the digest is just a hash of the URI and the size is left blank.

Args:

  • uri: The URI path of the reference to add. The URI path can be an object returned from Artifact.get_entry to store a reference to another artifact’s entry.
  • name: The path within the artifact to place the contents of this reference.
  • checksum: Whether or not to checksum the resource(s) located at the reference URI. Checksumming is strongly recommended as it enables automatic integrity validation. Disabling checksumming will speed up artifact creation, but reference directories will not be iterated through, so the objects in the directory will not be saved to the artifact. We recommend setting checksum=False when adding reference objects, in which case a new version will only be created if the reference URI changes.
  • max_objects: The maximum number of objects to consider when adding a reference that points to a directory or bucket store prefix. By default, the maximum number of objects allowed for Amazon S3, GCS, Azure, and local files is 10,000,000. Other URI schemas do not have a maximum.

Returns: The added manifest entries.

Raises:

  • ArtifactFinalizedError: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.

method Artifact.checkout

checkout(root: 'str | None' = None)  str

Replace the specified root directory with the contents of the artifact.

WARNING: This will delete all files in root that are not included in the artifact.

Args:

  • root: The directory to replace with this artifact’s files.

Returns: The path of the checked out contents.

Raises:

  • ArtifactNotLoggedError: If the artifact is not logged.

method Artifact.delete

delete(delete_aliases: 'bool' = False)  None

Delete an artifact and its files.

If called on a linked artifact, only the link is deleted, and the source artifact is unaffected.

Use artifact.unlink() instead of artifact.delete() to remove a link between a source artifact and a linked artifact.

Args:

  • delete_aliases: If set to True, deletes all aliases associated with the artifact. Otherwise, this raises an exception if the artifact has existing aliases. This parameter is ignored if the artifact is linked (a member of a portfolio collection).

Raises:

  • ArtifactNotLoggedError: If the artifact is not logged.

method Artifact.download

download(
    root: 'StrPath | None' = None,
    allow_missing_references: 'bool' = False,
    skip_cache: 'bool | None' = None,
    path_prefix: 'StrPath | None' = None,
    multipart: 'bool | None' = None
)  FilePathStr

Download the contents of the artifact to the specified root directory.

Existing files located within root are not modified. Explicitly delete root before you call download if you want the contents of root to exactly match the artifact.

Args:

  • root: The directory W&B stores the artifact’s files.
  • allow_missing_references: If set to True, any invalid reference paths will be ignored while downloading referenced files.
  • skip_cache: If set to True, the artifact cache will be skipped when downloading and W&B will download each file into the default root or specified download directory.
  • path_prefix: If specified, only files with a path that starts with the given prefix will be downloaded. Uses unix format (forward slashes).
  • multipart: If set to None (default), the artifact will be downloaded in parallel using multipart download if individual file size is greater than 2GB. If set to True or False, the artifact will be downloaded in parallel or serially regardless of the file size.

Returns: The path to the downloaded contents.

Raises:

  • ArtifactNotLoggedError: If the artifact is not logged.

method Artifact.file

file(root: 'str | None' = None)  StrPath

Download a single file artifact to the directory you specify with root.

Args:

  • root: The root directory to store the file. Defaults to ./artifacts/self.name/.

Returns: The full path of the downloaded file.

Raises:

  • ArtifactNotLoggedError: If the artifact is not logged.
  • ValueError: If the artifact contains more than one file.

method Artifact.files

files(names: 'list[str] | None' = None, per_page: 'int' = 50)  ArtifactFiles

Iterate over all files stored in this artifact.

Args:

  • names: The filename paths relative to the root of the artifact you wish to list.
  • per_page: The number of files to return per request.

Returns: An iterator containing File objects.

Raises:

  • ArtifactNotLoggedError: If the artifact is not logged.

method Artifact.finalize

finalize()  None

Finalize the artifact version.

You cannot modify an artifact version once it is finalized because the artifact is logged as a specific artifact version. Create a new artifact version to log more data to an artifact. An artifact is automatically finalized when you log the artifact with log_artifact.


method Artifact.get

get(name: 'str')  WBValue | None

Get the WBValue object located at the artifact relative name.

Args:

  • name: The artifact relative name to retrieve.

Returns: W&B object that can be logged with wandb.log() and visualized in the W&B UI.

Raises:

  • ArtifactNotLoggedError: if the artifact isn’t logged or the run is offline.

method Artifact.get_added_local_path_name

get_added_local_path_name(local_path: 'str')  str | None

Get the artifact relative name of a file added by a local filesystem path.

Args:

  • local_path: The local path to resolve into an artifact relative name.

Returns: The artifact relative name.


method Artifact.get_entry

get_entry(name: 'StrPath')  ArtifactManifestEntry

Get the entry with the given name.

Args:

  • name: The artifact relative name to get

Returns: A W&B object.

Raises:

  • ArtifactNotLoggedError: if the artifact isn’t logged or the run is offline.
  • KeyError: if the artifact doesn’t contain an entry with the given name.

method Artifact.get_path

get_path(name: 'StrPath')  ArtifactManifestEntry

Deprecated. Use get_entry(name).


method Artifact.is_draft

is_draft()  bool

Check if artifact is not saved.

Returns: Boolean. False if artifact is saved. True if artifact is not saved.


method Artifact.json_encode

json_encode()  dict[str, Any]

Returns the artifact encoded to the JSON format.

Returns: A dict with string keys representing attributes of the artifact.


method Artifact.link

link(target_path: 'str', aliases: 'list[str] | None' = None)  Artifact | None

Link this artifact to a portfolio (a promoted collection of artifacts).

Args:

  • target_path: The path to the portfolio inside a project. The target path must adhere to one of the following schemas {portfolio}, {project}/{portfolio} or {entity}/{project}/{portfolio}. To link the artifact to the Model Registry, rather than to a generic portfolio inside a project, set target_path to the following schema {"model-registry"}/{Registered Model Name} or {entity}/{"model-registry"}/{Registered Model Name}.
  • aliases: A list of strings that uniquely identifies the artifact inside the specified portfolio.

Raises:

  • ArtifactNotLoggedError: If the artifact is not logged.

Returns: The linked artifact if linking was successful, otherwise None.


method Artifact.logged_by

logged_by()  Run | None

Get the W&B run that originally logged the artifact.

Returns: The W&B run that originally logged the artifact, or None if the artifact was not logged by a run.

Raises:

  • ArtifactNotLoggedError: If the artifact is not logged.

method Artifact.new_draft

new_draft()  Artifact

Create a new draft artifact with the same content as this committed artifact.

Modifying an existing artifact creates a new artifact version known as an “incremental artifact”. The artifact returned can be extended or modified and logged as a new version.

Returns: An Artifact object.

Raises:

  • ArtifactNotLoggedError: If the artifact is not logged.

method Artifact.new_file

new_file(
    name: 'str',
    mode: 'str' = 'x',
    encoding: 'str | None' = None
)  Iterator[IO]

Open a new temporary file and add it to the artifact.

Args:

  • name: The name of the new file to add to the artifact.
  • mode: The file access mode to use to open the new file.
  • encoding: The encoding used to open the new file.

Returns: A new file object that can be written to. Upon closing, the file is automatically added to the artifact.

Raises:

  • ArtifactFinalizedError: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.

method Artifact.remove

remove(item: 'StrPath | ArtifactManifestEntry')  None

Remove an item from the artifact.

Args:

  • item: The item to remove. Can be a specific manifest entry or the name of an artifact-relative path. If the item matches a directory all items in that directory will be removed.

Raises:

  • ArtifactFinalizedError: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
  • FileNotFoundError: If the item isn’t found in the artifact.

method Artifact.save

save(
    project: 'str | None' = None,
    settings: 'wandb.Settings | None' = None
)  None

Persist any changes made to the artifact.

If currently in a run, that run will log this artifact. If not currently in a run, a run of type “auto” is created to track this artifact.

Args:

  • project: A project to use for the artifact in the case that a run is not already in context.
  • settings: A settings object to use when initializing an automatic run. Most commonly used in testing harness.

method Artifact.unlink

unlink()  None

Unlink this artifact if it is currently a member of a promoted collection of artifacts.

Raises:

  • ArtifactNotLoggedError: If the artifact is not logged.
  • ValueError: If the artifact is not linked, in other words, it is not a member of a portfolio collection.

method Artifact.used_by

used_by()  list[Run]

Get a list of the runs that have used this artifact and its linked artifacts.

Returns: A list of Run objects.

Raises:

  • ArtifactNotLoggedError: If the artifact is not logged.

method Artifact.verify

verify(root: 'str | None' = None)  None

Verify that the contents of an artifact match the manifest.

All files in the directory are checksummed and the checksums are then cross-referenced against the artifact’s manifest. References are not verified.

Args:

  • root: The directory to verify. If None, the artifact will be downloaded to ‘./artifacts/self.name/’.

Raises:

  • ArtifactNotLoggedError: If the artifact is not logged.
  • ValueError: If the verification fails.

method Artifact.wait

wait(timeout: 'int | None' = None)  Artifact

If needed, wait for this artifact to finish logging.

Args:

  • timeout: The time, in seconds, to wait.

Returns: An Artifact object.

5.4.1.1.2 - ArtifactTTL

class ArtifactTTL

An enumeration.

5.4.1.1.3 - Error

class Error

Base W&B Error.

method Error.__init__

__init__(message, context: Optional[dict] = None)  None

5.4.1.1.4 - Run

class Run

A unit of computation logged by W&B. Typically, this is an ML experiment.

Call wandb.init() to create a new run. wandb.init() starts a new run and returns a wandb.Run object. Each run is associated with a unique ID (run ID). There is only ever at most one active wandb.Run in any process.

For distributed training experiments, you can either track each process separately using one run per process or track all processes to a single run. See Log distributed training experiments for more information.

You can log data to a run with wandb.log(). Anything you log using wandb.log() is sent to that run. See Create an experiment or the wandb.init API reference page for more information.

There is another Run object in the wandb.apis.public namespace. Use this object to interact with runs that have already been created.

Finish active runs before starting new runs. Use a context manager (with statement) to automatically finish the run or use wandb.finish() to finish a run manually. W&B recommends using a context manager to automatically finish the run.

Attributes:

  • summary: (Summary) Single values set for each wandb.log() key. By default, summary is set to the last value logged. You can manually set summary to the best value, like max accuracy, instead of the final value.

Examples: Create a run with wandb.init():

import wandb

# Start a new run and log some data
# Use context manager (`with` statement) to automatically finish the run
with wandb.init(entity="entity", project="project") as run:
    run.log({"accuracy": acc, "loss": loss})

method Run.__init__

__init__(
    settings: 'Settings',
    config: 'dict[str, Any] | None' = None,
    sweep_config: 'dict[str, Any] | None' = None,
    launch_config: 'dict[str, Any] | None' = None
)  None

property Run.config

Config object associated with this run.


property Run.config_static

Static config object associated with this run.


property Run.dir

The directory where files associated with the run are saved.


property Run.disabled

True if the run is disabled, False otherwise.


property Run.entity

The name of the W&B entity associated with the run.

Entity can be a username or the name of a team or organization.


property Run.group

Name of the group associated with the run.

Setting a group helps the W&B UI organize runs. If you are doing distributed training, you should give all of the runs in the training the same group. If you are doing cross-validation, you should give all the cross-validation folds the same group.


property Run.id

Identifier for this run.


property Run.job_type

Name of the job type associated with the run.


property Run.name

Display name of the run.

Display names are not guaranteed to be unique and may be descriptive. By default, they are randomly generated.


property Run.notes

Notes associated with the run, if there are any.

Notes can be a multiline string and can also use markdown and latex equations inside $$, like $x + 3$.


property Run.offline

True if the run is offline, False otherwise.


property Run.path

Path to the run.

Run paths include entity, project, and run ID, in the format entity/project/run_id.


property Run.project

Name of the W&B project associated with the run.


property Run.project_url

URL of the W&B project associated with the run, if there is one.

Offline runs do not have a project URL.


property Run.resumed

True if the run was resumed, False otherwise.


property Run.settings

A frozen copy of run’s Settings object.


property Run.start_time

Unix timestamp (in seconds) of when the run started.


property Run.starting_step

The first step of the run.


property Run.step

Current value of the step.

This counter is incremented by wandb.log.


property Run.sweep_id

Identifier for the sweep associated with the run, if there is one.


property Run.sweep_url

URL of the sweep associated with the run, if there is one.

Offline runs do not have a sweep URL.


property Run.tags

Tags associated with the run, if there are any.


property Run.url

The URL for the W&B run, if there is one.

Offline runs do not have a URL.


method Run.alert

alert(
    title: 'str',
    text: 'str',
    level: 'str | AlertLevel | None' = None,
    wait_duration: 'int | float | timedelta | None' = None
)  None

Create an alert with the given title and text.

Args:

  • title: The title of the alert, must be less than 64 characters long.
  • text: The text body of the alert.
  • level: The alert level to use, either: INFO, WARN, or ERROR.
  • wait_duration: The time to wait (in seconds) before sending another alert with this title.

method Run.define_metric

define_metric(
    name: 'str',
    step_metric: 'str | wandb_metric.Metric | None' = None,
    step_sync: 'bool | None' = None,
    hidden: 'bool | None' = None,
    summary: 'str | None' = None,
    goal: 'str | None' = None,
    overwrite: 'bool | None' = None
)  wandb_metric.Metric

Customize metrics logged with wandb.log().

Args:

  • name: The name of the metric to customize.
  • step_metric: The name of another metric to serve as the X-axis for this metric in automatically generated charts.
  • step_sync: Automatically insert the last value of step_metric into run.log() if it is not provided explicitly. Defaults to True if step_metric is specified.
  • hidden: Hide this metric from automatic plots.
  • summary: Specify aggregate metrics added to summary. Supported aggregations include “min”, “max”, “mean”, “last”, “best”, “copy” and “none”. “best” is used together with the goal parameter. “none” prevents a summary from being generated. “copy” is deprecated and should not be used.
  • goal: Specify how to interpret the “best” summary type. Supported options are “minimize” and “maximize”.
  • overwrite: If false, then this call is merged with previous define_metric calls for the same metric by using their values for any unspecified parameters. If true, then unspecified parameters overwrite values specified by previous calls.

Returns: An object that represents this call but can otherwise be discarded.


method Run.display

display(height: 'int' = 420, hidden: 'bool' = False)  bool

Display this run in Jupyter.


method Run.finish

finish(exit_code: 'int | None' = None, quiet: 'bool | None' = None)  None

Finish a run and upload any remaining data.

Marks the completion of a W&B run and ensures all data is synced to the server. The run’s final state is determined by its exit conditions and sync status.

Run States:

  • Running: Active run that is logging data and/or sending heartbeats.
  • Crashed: Run that stopped sending heartbeats unexpectedly.
  • Finished: Run completed successfully (exit_code=0) with all data synced.
  • Failed: Run completed with errors (exit_code!=0).
  • Killed: Run was forcibly stopped before it could finish.

Args:

  • exit_code: Integer indicating the run’s exit status. Use 0 for success, any other value marks the run as failed.
  • quiet: Deprecated. Configure logging verbosity using wandb.Settings(quiet=...).

method Run.finish_artifact

finish_artifact(
    artifact_or_path: 'Artifact | str',
    name: 'str | None' = None,
    type: 'str | None' = None,
    aliases: 'list[str] | None' = None,
    distributed_id: 'str | None' = None
)  Artifact

Finishes a non-finalized artifact as output of a run.

Subsequent “upserts” with the same distributed ID will result in a new version.

Args:

  • artifact_or_path: A path to the contents of this artifact, can be in the following forms:
    • /local/directory
    • /local/directory/file.txt
    • s3://bucket/path
    You can also pass an Artifact object created by calling wandb.Artifact.
  • name: An artifact name. May be prefixed with entity/project. Valid names can be in the following forms:
    • name:version
    • name:alias
    • digest
    This will default to the basename of the path prepended with the current run id if not specified.
  • type: The type of artifact to log, examples include dataset, model
  • aliases: Aliases to apply to this artifact, defaults to ["latest"]
  • distributed_id: Unique string that all distributed jobs share. If None, defaults to the run’s group name.

Returns: An Artifact object.


method Run.get_project_url

get_project_url()  str | None

This method is deprecated and will be removed in a future release. Use run.project_url instead.

URL of the W&B project associated with the run, if there is one. Offline runs do not have a project URL.


method Run.get_sweep_url

get_sweep_url()  str | None

This method is deprecated and will be removed in a future release. Use run.sweep_url instead.

The URL of the sweep associated with the run, if there is one. Offline runs do not have a sweep URL.


method Run.get_url

get_url()  str | None

This method is deprecated and will be removed in a future release. Use run.url instead.

URL of the W&B run, if there is one. Offline runs do not have a URL.


method Run.link_artifact

link_artifact(
    artifact: 'Artifact',
    target_path: 'str',
    aliases: 'list[str] | None' = None
)  Artifact | None

Link the given artifact to a portfolio (a promoted collection of artifacts).

Linked artifacts are visible in the UI for the specified portfolio.

Args:

  • artifact: the (public or local) artifact which will be linked
  • target_path: takes the following forms: {portfolio}, {project}/{portfolio}, or {entity}/{project}/{portfolio}
  • aliases: List[str] - optional alias(es) that will only be applied on this linked artifact inside the portfolio. The alias “latest” will always be applied to the latest version of an artifact that is linked.

Returns: The linked artifact if linking was successful, otherwise None.


method Run.link_model

link_model(
    path: 'StrPath',
    registered_model_name: 'str',
    name: 'str | None' = None,
    aliases: 'list[str] | None' = None
)  Artifact | None

Log a model artifact version and link it to a registered model in the model registry.

Linked model versions are visible in the UI for the specified registered model.

This method will:

  • Check if ’name’ model artifact has been logged. If so, use the artifact version that matches the files located at ‘path’ or log a new version. Otherwise log files under ‘path’ as a new model artifact, ’name’ of type ‘model’.
  • Check if registered model with name ‘registered_model_name’ exists in the ‘model-registry’ project. If not, create a new registered model with name ‘registered_model_name’.
  • Link version of model artifact ’name’ to registered model, ‘registered_model_name’.
  • Attach aliases from ‘aliases’ list to the newly linked model artifact version.

Args:

  • path: (str) A path to the contents of this model, can be in the following forms:
    • /local/directory
    • /local/directory/file.txt
    • s3://bucket/path
  • registered_model_name: The name of the registered model that the model is to be linked to. A registered model is a collection of model versions linked to the model registry, typically representing a team’s specific ML Task. The entity that this registered model belongs to will be derived from the run.
  • name: The name of the model artifact that files in ‘path’ will be logged to. This will default to the basename of the path prepended with the current run id if not specified.
  • aliases: Aliases that will only be applied on this linked artifact inside the registered model. The alias “latest” will always be applied to the latest version of an artifact that is linked.

Raises:

  • AssertionError: If registered_model_name is a path or if model artifact ‘name’ is of a type that does not contain the substring ‘model’.
  • ValueError: If name has invalid special characters.

Returns: The linked artifact if linking was successful, otherwise None.

Examples:

run.link_model(
   path="/local/directory",
   registered_model_name="my_reg_model",
   name="my_model_artifact",
   aliases=["production"],
)

Invalid usage

run.link_model(
    path="/local/directory",
    registered_model_name="my_entity/my_project/my_reg_model",
    name="my_model_artifact",
    aliases=["production"],
)

run.link_model(
    path="/local/directory",
    registered_model_name="my_reg_model",
    name="my_entity/my_project/my_model_artifact",
    aliases=["production"],
)

method Run.log

log(
    data: 'dict[str, Any]',
    step: 'int | None' = None,
    commit: 'bool | None' = None
) → None

Upload run data.

Use log to log data from runs, such as scalars, images, video, histograms, plots, and tables. See Log objects and media for code snippets, best practices, and more.

Basic usage:

import wandb

with wandb.init() as run:
     run.log({"train-loss": 0.5, "accuracy": 0.9})

The previous code snippet saves the loss and accuracy to the run’s history and updates the summary values for these metrics.

Visualize logged data in a workspace at wandb.ai, or locally on a self-hosted instance of the W&B app, or export data to visualize and explore locally, such as in a Jupyter notebook, with the Public API.

Logged values don’t have to be scalars. You can log any W&B supported Data Type such as images, audio, video, and more. For example, you can use wandb.Table to log structured data. See Log tables, visualize and query data tutorial for more details.

W&B organizes metrics with a forward slash (/) in their name into sections named using the text before the final slash. For example, the following results in two sections named “train” and “validate”:

run.log(
     {
         "train/accuracy": 0.9,
         "train/loss": 30,
         "validate/accuracy": 0.8,
         "validate/loss": 20,
     }
)

Only one level of nesting is supported; run.log({"a/b/c": 1}) produces a section named “a/b”.

run.log is not intended to be called more than a few times per second. For optimal performance, limit your logging to once every N iterations, or collect data over multiple iterations and log it in a single step.

By default, each call to log creates a new “step”. The step must always increase, and it is not possible to log to a previous step. You can use any metric as the X axis in charts. See Custom log axes for more details.

In many cases, it is better to treat the W&B step like you’d treat a timestamp rather than a training step.

# Example: log an "epoch" metric for use as an X axis.
run.log({"epoch": 40, "train-loss": 0.5})

It is possible to use multiple log invocations to log to the same step with the step and commit parameters. The following are all equivalent:

# Normal usage:
run.log({"train-loss": 0.5, "accuracy": 0.8})
run.log({"train-loss": 0.4, "accuracy": 0.9})

# Implicit step without auto-incrementing:
run.log({"train-loss": 0.5}, commit=False)
run.log({"accuracy": 0.8})
run.log({"train-loss": 0.4}, commit=False)
run.log({"accuracy": 0.9})

# Explicit step:
run.log({"train-loss": 0.5}, step=current_step)
run.log({"accuracy": 0.8}, step=current_step)
current_step += 1
run.log({"train-loss": 0.4}, step=current_step)
run.log({"accuracy": 0.9}, step=current_step)

Args:

  • data: A dict with str keys and values that are serializable Python objects, including: int, float and string; any of the wandb.data_types; lists, tuples and NumPy arrays of serializable Python objects; other dicts of this structure.
  • step: The step number to log. If None, then an implicit auto-incrementing step is used. See the notes in the description.
  • commit: If true, finalize and upload the step. If false, then accumulate data for the step. See the notes in the description. If step is None, then the default is commit=True; otherwise, the default is commit=False.
  • sync: This argument is deprecated and does nothing.

Examples: For more examples, and more detailed ones, see our guides to logging.

Basic usage

import wandb

run = wandb.init()
run.log({"accuracy": 0.9, "epoch": 5})

Incremental logging

import wandb

run = wandb.init()
run.log({"loss": 0.2}, commit=False)
# Somewhere else when I'm ready to report this step:
run.log({"accuracy": 0.8})

Histogram

import numpy as np
import wandb

# sample gradients at random from normal distribution
gradients = np.random.randn(100, 100)
run = wandb.init()
run.log({"gradients": wandb.Histogram(gradients)})

Image from NumPy

import numpy as np
import wandb

run = wandb.init()
examples = []
for i in range(3):
    pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
    image = wandb.Image(pixels, caption=f"random field {i}")
    examples.append(image)
run.log({"examples": examples})

Image from PIL

import numpy as np
from PIL import Image as PILImage
import wandb

run = wandb.init()
examples = []
for i in range(3):
    pixels = np.random.randint(
         low=0,
         high=256,
         size=(100, 100, 3),
         dtype=np.uint8,
    )
    pil_image = PILImage.fromarray(pixels, mode="RGB")
    image = wandb.Image(pil_image, caption=f"random field {i}")
    examples.append(image)
run.log({"examples": examples})

Video from NumPy

import numpy as np
import wandb

run = wandb.init()
# axes are (time, channel, height, width)
frames = np.random.randint(
    low=0,
    high=256,
    size=(10, 3, 100, 100),
    dtype=np.uint8,
)
run.log({"video": wandb.Video(frames, fps=4)})

Matplotlib plot

from matplotlib import pyplot as plt
import numpy as np
import wandb

run = wandb.init()
fig, ax = plt.subplots()
x = np.linspace(0, 10)
y = x * x
ax.plot(x, y)  # plot y = x^2
run.log({"chart": fig})

PR Curve

import wandb

run = wandb.init()
run.log({"pr": wandb.plot.pr_curve(y_test, y_probas, labels)})

3D Object

import wandb

run = wandb.init()
run.log(
    {
         "generated_samples": [
             wandb.Object3D(open("sample.obj")),
             wandb.Object3D(open("sample.gltf")),
             wandb.Object3D(open("sample.glb")),
         ]
    }
)

Raises:

  • wandb.Error: if called before wandb.init
  • ValueError: if invalid data is passed



method Run.log_artifact

log_artifact(
    artifact_or_path: 'Artifact | StrPath',
    name: 'str | None' = None,
    type: 'str | None' = None,
    aliases: 'list[str] | None' = None,
    tags: 'list[str] | None' = None
) → Artifact

Declare an artifact as an output of a run.

Args:

  • artifact_or_path: A path to the contents of this artifact, can be in the following forms
    • /local/directory
    • /local/directory/file.txt
    • s3://bucket/path
  • name: An artifact name. Defaults to the basename of the path prepended with the current run id if not specified. Valid names can be in the following forms:
    • name:version
    • name:alias
    • digest
  • type: The type of artifact to log. Common examples include dataset and model
  • aliases: Aliases to apply to this artifact, defaults to ["latest"]
  • tags: Tags to apply to this artifact, if any.

Returns: An Artifact object.


method Run.log_code

log_code(
    root: 'str | None' = '.',
    name: 'str | None' = None,
    include_fn: 'Callable[[str, str], bool] | Callable[[str], bool]' = <function _is_py_requirements_or_dockerfile>,
    exclude_fn: 'Callable[[str, str], bool] | Callable[[str], bool]' = <function exclude_wandb_fn>
) → Artifact | None

Save the current state of your code to a W&B Artifact.

By default, it walks the current directory and logs all files that end with .py.

Args:

  • root: The relative (to os.getcwd()) or absolute path to recursively find code from.
  • name: The name of our code artifact. By default, we’ll name the artifact source-$PROJECT_ID-$ENTRYPOINT_RELPATH. There may be scenarios where you want many runs to share the same artifact. Specifying name allows you to achieve that.
  • include_fn: A callable that accepts a file path and (optionally) root path and returns True when it should be included and False otherwise. This defaults to lambda path, root: path.endswith(".py").
  • exclude_fn: A callable that accepts a file path and (optionally) root path and returns True when it should be excluded and False otherwise. This defaults to a function that excludes all files within <root>/.wandb/ and <root>/wandb/ directories.

Examples: Basic usage

import wandb

with wandb.init() as run:
    run.log_code()

Advanced usage

import wandb

with wandb.init() as run:
    run.log_code(
         root="../",
         include_fn=lambda path: path.endswith(".py") or path.endswith(".ipynb"),
         exclude_fn=lambda path, root: os.path.relpath(path, root).startswith(
             "cache/"
         ),
    )

Returns: An Artifact object if code was logged


method Run.log_model

log_model(
    path: 'StrPath',
    name: 'str | None' = None,
    aliases: 'list[str] | None' = None
) → None

Logs a model artifact as an output of this run.

The name of the model artifact may contain only alphanumeric characters, underscores, and hyphens.

Args:

  • path: A path to the contents of this model, can be in the following forms
    • /local/directory
    • /local/directory/file.txt
    • s3://bucket/path
  • name: A name to assign to the model artifact that the file contents will be added to. The string may contain only alphanumeric characters, dashes, underscores, and dots. This will default to the basename of the path prepended with the current run id if not specified.
  • aliases: Aliases to apply to the created model artifact, defaults to ["latest"]

Returns: None

Raises:

  • ValueError: if name has invalid special characters.

Examples:

run.log_model(
   path="/local/directory",
   name="my_model_artifact",
   aliases=["production"],
)

Invalid usage

run.log_model(
    path="/local/directory",
    name="my_entity/my_project/my_model_artifact",
    aliases=["production"],
)

method Run.mark_preempting

mark_preempting() → None

Mark this run as preempting.

Also tells the internal process to immediately report this to server.


method Run.project_name

project_name() → str

This method is deprecated and will be removed in a future release. Use run.project instead.

Name of the W&B project associated with the run.


method Run.restore

restore(
    name: 'str',
    run_path: 'str | None' = None,
    replace: 'bool' = False,
    root: 'str | None' = None
) → None | TextIO

Download the specified file from cloud storage.

File is placed into the current directory or run directory. By default, will only download the file if it doesn’t already exist.

Args:

  • name: The name of the file.
  • run_path: Optional path to a run to pull files from, for example username/project_name/run_id. This is required if wandb.init has not been called.
  • replace: Whether to download the file even if it already exists locally
  • root: The directory to download the file to. Defaults to the current directory or the run directory if wandb.init was called.

Returns: None if it can’t find the file, otherwise a file object open for reading.

Raises:

  • wandb.CommError: If W&B can’t connect to the W&B backend.
  • ValueError: If the file is not found or can’t find run_path.

method Run.save

save(
    glob_str: 'str | os.PathLike',
    base_path: 'str | os.PathLike | None' = None,
    policy: 'PolicyName' = 'live'
) → bool | list[str]

Sync one or more files to W&B.

Relative paths are relative to the current working directory.

A Unix glob, such as “myfiles/*”, is expanded at the time save is called regardless of the policy. In particular, new files are not picked up automatically.

A base_path may be provided to control the directory structure of uploaded files. It should be a prefix of glob_str, and the directory structure beneath it is preserved.

When given an absolute path or glob and no base_path, one directory level is preserved, as in the examples below.

Args:

  • glob_str: A relative or absolute path or Unix glob.
  • base_path: A path to use to infer a directory structure; see examples.
  • policy: One of live, now, or end.
    • live: upload the file as it changes, overwriting the previous version
    • now: upload the file once now
    • end: upload file when the run ends

Returns: Paths to the symlinks created for the matched files.

For historical reasons, this may return a boolean in legacy code.

Examples:
import wandb

wandb.init()

wandb.save("these/are/myfiles/*")
# => Saves files in a "these/are/myfiles/" folder in the run.

wandb.save("these/are/myfiles/*", base_path="these")
# => Saves files in an "are/myfiles/" folder in the run.

wandb.save("/User/username/Documents/run123/*.txt")
# => Saves files in a "run123/" folder in the run. See note below.

wandb.save("/User/username/Documents/run123/*.txt", base_path="/User")
# => Saves files in a "username/Documents/run123/" folder in the run.

wandb.save("files/*/saveme.txt")
# => Saves each "saveme.txt" file in an appropriate subdirectory
#    of "files/".

method Run.status

status() → RunStatus

Get sync info from the internal backend, about the current run’s sync status.


method Run.to_html

to_html(height: 'int' = 420, hidden: 'bool' = False) → str

Generate HTML containing an iframe displaying the current run.


method Run.unwatch

unwatch(
    models: 'torch.nn.Module | Sequence[torch.nn.Module] | None' = None
) → None

Remove pytorch model topology, gradient and parameter hooks.

Args:

  • models: Optional list of pytorch models that have had watch called on them.

method Run.upsert_artifact

upsert_artifact(
    artifact_or_path: 'Artifact | str',
    name: 'str | None' = None,
    type: 'str | None' = None,
    aliases: 'list[str] | None' = None,
    distributed_id: 'str | None' = None
) → Artifact

Declare (or append to) a non-finalized artifact as output of a run.

Note that you must call run.finish_artifact() to finalize the artifact. This is useful when distributed jobs need to all contribute to the same artifact.

Args:

  • artifact_or_path: A path to the contents of this artifact, can be in the following forms:
    • /local/directory
    • /local/directory/file.txt
    • s3://bucket/path
  • name: An artifact name. May be prefixed with “entity/project”. Defaults to the basename of the path prepended with the current run ID if not specified. Valid names can be in the following forms:
    • name:version
    • name:alias
    • digest
  • type: The type of artifact to log. Common examples include dataset, model.
  • aliases: Aliases to apply to this artifact, defaults to ["latest"].
  • distributed_id: Unique string that all distributed jobs share. If None, defaults to the run’s group name.

Returns: An Artifact object.


method Run.use_artifact

use_artifact(
    artifact_or_name: 'str | Artifact',
    type: 'str | None' = None,
    aliases: 'list[str] | None' = None,
    use_as: 'str | None' = None
) → Artifact

Declare an artifact as an input to a run.

Call download or file on the returned object to get the contents locally.

Args:

  • artifact_or_name: The name of the artifact to use. May be prefixed with the name of the project the artifact was logged to ("project/name" or "entity/project/name"). If no entity is specified in the name, the Run or API setting’s entity is used. Valid names can be in the following forms
    • name:version
    • name:alias
  • type: The type of artifact to use.
  • aliases: Aliases to apply to this artifact
  • use_as: This argument is deprecated and does nothing.

Returns: An Artifact object.

Examples:

import wandb

run = wandb.init(project="<example>")

# Use an artifact by name and alias
artifact_a = run.use_artifact(artifact_or_name="<name>:<alias>")

# Use an artifact by name and version
artifact_b = run.use_artifact(artifact_or_name="<name>:v<version>")

# Use an artifact by entity/project/name:alias
artifact_c = run.use_artifact(
   artifact_or_name="<entity>/<project>/<name>:<alias>"
)

# Use an artifact by entity/project/name:version
artifact_d = run.use_artifact(
   artifact_or_name="<entity>/<project>/<name>:v<version>"
)

method Run.use_model

use_model(name: 'str') → FilePathStr

Download the files logged in a model artifact name.

Args:

  • name: A model artifact name. ‘name’ must match the name of an existing logged model artifact. May be prefixed with entity/project/. Valid names can be in the following forms
    • model_artifact_name:version
    • model_artifact_name:alias

Raises:

  • AssertionError: if model artifact name is of a type that does not contain the substring ‘model’.

Returns:

  • path: path to downloaded model artifact file(s).

Examples:

run.use_model(
   name="my_model_artifact:latest",
)

run.use_model(
   name="my_project/my_model_artifact:v0",
)

run.use_model(
   name="my_entity/my_project/my_model_artifact:<digest>",
)

Invalid usage

run.use_model(
    name="my_entity/my_project/my_model_artifact",
)

method Run.watch

watch(
    models: 'torch.nn.Module | Sequence[torch.nn.Module]',
    criterion: 'torch.F | None' = None,
    log: "Literal['gradients', 'parameters', 'all'] | None" = 'gradients',
    log_freq: 'int' = 1000,
    idx: 'int | None' = None,
    log_graph: 'bool' = False
) → None

Hook into given PyTorch model to monitor gradients and the model’s computational graph.

This function can track parameters, gradients, or both during training.

Args:

  • models: A single model or a sequence of models to be monitored.
  • criterion: The loss function being optimized (optional).
  • log: Specifies whether to log “gradients”, “parameters”, or “all”. Set to None to disable logging. (default=“gradients”).
  • log_freq: Frequency (in batches) to log gradients and parameters. (default=1000)
  • idx: Index used when tracking multiple models with wandb.watch. (default=None)
  • log_graph: Whether to log the model’s computational graph. (default=False)

Raises: ValueError: If wandb.init has not been called or if any of the models are not instances of torch.nn.Module.

5.4.1.1.5 - Settings

class Settings

Settings for the W&B SDK.

This class manages configuration settings for the W&B SDK, ensuring type safety and validation of all settings. Settings are accessible as attributes and can be initialized programmatically, through environment variables (WANDB_ prefix), and with configuration files.

The settings are organized into three categories:

  1. Public settings: Core configuration options that users can safely modify to customize W&B’s behavior for their specific needs.
  2. Internal settings: Settings prefixed with ‘x_’ that handle low-level SDK behavior. These settings are primarily for internal use and debugging. While they can be modified, they are not considered part of the public API and may change without notice in future versions.
  3. Computed settings: Read-only settings that are automatically derived from other settings or the environment.

Args:

  • allow_offline_artifacts (bool): Flag to allow table artifacts to be synced in offline mode.
  • allow_val_change (bool): Flag to allow modification of Config values after they’ve been set.
  • anonymous (Optional[Literal[“allow”, “must”, “never”]]): Controls anonymous data logging. Possible values are:
    • “never”: requires you to link your W&B account before tracking the run, so you don’t accidentally create an anonymous run.
    • “allow”: lets a logged-in user track runs with their account, but lets someone who is running the script without a W&B account see the charts in the UI.
    • “must”: sends the run to an anonymous account instead of to a signed-up user account.
  • api_key (Optional[str]): The W&B API key.
  • azure_account_url_to_access_key (Optional[Dict[str, str]]): Mapping of Azure account URLs to their corresponding access keys for Azure integration.
  • base_url (str): The URL of the W&B backend for data synchronization.
  • code_dir (Optional[str]): Directory containing the code to be tracked by W&B.
  • config_paths (Optional[Sequence[str]]): Paths to files to load configuration from into the Config object.
  • console (Literal[“auto”, “off”, “wrap”, “redirect”, “wrap_raw”, “wrap_emu”]): The type of console capture to be applied. Possible values are:
    • “auto” - Automatically selects the console capture method based on the system environment and settings.
    • “off” - Disables console capture.
    • “redirect” - Redirects low-level file descriptors for capturing output.
    • “wrap” - Overrides the write methods of sys.stdout/sys.stderr. Will be mapped to either “wrap_raw” or “wrap_emu” based on the state of the system.
    • “wrap_raw” - Same as “wrap” but captures raw output directly instead of through an emulator. Derived from the wrap setting and should not be set manually.
    • “wrap_emu” - Same as “wrap” but captures output through an emulator. Derived from the wrap setting and should not be set manually.
  • console_multipart (bool): Whether to produce multipart console log files.
  • credentials_file (str): Path to file for writing temporary access tokens.
  • disable_code (bool): Whether to disable capturing the code.
  • disable_git (bool): Whether to disable capturing the git state.
  • disable_job_creation (bool): Whether to disable the creation of a job artifact for W&B Launch.
  • docker (Optional[str]): The Docker image used to execute the script.
  • email (Optional[str]): The email address of the user.
  • entity (Optional[str]): The W&B entity, such as a user or a team.
  • organization (Optional[str]): The W&B organization.
  • force (bool): Whether to pass the force flag to wandb.login().
  • fork_from (Optional[RunMoment]): Specifies a point in a previous execution of a run to fork from. The point is defined by the run ID, a metric, and its value. Only the metric ‘_step’ is supported.
  • git_commit (Optional[str]): The git commit hash to associate with the run.
  • git_remote (str): The git remote to associate with the run.
  • git_remote_url (Optional[str]): The URL of the git remote repository.
  • git_root (Optional[str]): Root directory of the git repository.
  • heartbeat_seconds (int): Interval in seconds between heartbeat signals sent to the W&B servers.
  • host (Optional[str]): Hostname of the machine running the script.
  • http_proxy (Optional[str]): Custom proxy servers for http requests to W&B.
  • https_proxy (Optional[str]): Custom proxy servers for https requests to W&B.
  • identity_token_file (Optional[str]): Path to file containing an identity token (JWT) for authentication.
  • ignore_globs (Sequence[str]): Unix glob patterns relative to files_dir specifying files to exclude from upload.
  • init_timeout (float): Time in seconds to wait for the wandb.init call to complete before timing out.
  • insecure_disable_ssl (bool): Whether to disable SSL verification.
  • job_name (Optional[str]): Name of the Launch job running the script.
  • job_source (Optional[Literal[“repo”, “artifact”, “image”]]): Source type for Launch.
  • label_disable (bool): Whether to disable automatic labeling features.
  • launch (bool): Flag to indicate if the run is being launched through W&B Launch.
  • launch_config_path (Optional[str]): Path to the launch configuration file.
  • login_timeout (Optional[float]): Time in seconds to wait for login operations before timing out.
  • mode (Literal[“online”, “offline”, “dryrun”, “disabled”, “run”, “shared”]): The operating mode for W&B logging and synchronization.
  • notebook_name (Optional[str]): Name of the notebook if running in a Jupyter-like environment.
  • program (Optional[str]): Path to the script that created the run, if available.
  • program_abspath (Optional[str]): The absolute path from the root repository directory to the script that created the run. Root repository directory is defined as the directory containing the .git directory, if it exists. Otherwise, it’s the current working directory.
  • program_relpath (Optional[str]): The relative path to the script that created the run.
  • project (Optional[str]): The W&B project ID.
  • quiet (bool): Flag to suppress non-essential output.
  • reinit (Union[Literal[“default”, “return_previous”, “finish_previous”, “create_new”], bool]): What to do when wandb.init() is called while a run is active. Options are
    • “default”: Use “finish_previous” in notebooks and “return_previous” otherwise.
    • “return_previous”: Return the most recently created run that is not yet finished. This does not update wandb.run; see the “create_new” option.
    • “finish_previous”: Finish all active runs, then return a new run.
    • “create_new”: Create a new run without modifying other active runs. Does not update wandb.run and top-level functions like wandb.log. Because of this, some older integrations that rely on the global run will not work.
  • relogin (bool): Whether to force a new login attempt.
  • resume (Optional[Literal[“allow”, “must”, “never”, “auto”]]): Specifies the resume behavior for the run. The available options are
    • “must”: Resumes from an existing run with the same ID. If no such run exists, it will result in failure.
    • “allow”: Attempts to resume from an existing run with the same ID. If none is found, a new run will be created.
    • “never”: Always starts a new run. If a run with the same ID already exists, it will result in failure.
    • “auto”: Automatically resumes from the most recent failed run on the same machine.
  • resume_from (Optional[RunMoment]): Specifies a point in a previous execution of a run to resume from. The point is defined by the run ID, a metric, and its value. Currently, only the metric ‘_step’ is supported.
  • resumed (bool): Indication from the server about the state of the run. This is different from resume, a user provided flag.
  • root_dir (str): The root directory to use as the base for all run-related paths. Used to derive the wandb directory and the run directory.
  • run_group (Optional[str]): Group identifier for related runs. Used for grouping runs in the UI.
  • run_id (Optional[str]): The ID of the run.
  • run_job_type (Optional[str]): Type of job being run (e.g., training, evaluation).
  • run_name (Optional[str]): Human-readable name for the run.
  • run_notes (Optional[str]): Additional notes or description for the run.
  • run_tags (Optional[Tuple[str, …]]): Tags to associate with the run for organization and filtering.
  • sagemaker_disable (bool): Flag to disable SageMaker-specific functionality.
  • save_code (Optional[bool]): Whether to save the code associated with the run.
  • settings_system (Optional[str]): Path to the system-wide settings file.
  • show_colors (Optional[bool]): Whether to use colored output in the console.
  • show_emoji (Optional[bool]): Whether to show emoji in the console output.
  • show_errors (bool): Whether to display error messages.
  • show_info (bool): Whether to display informational messages.
  • show_warnings (bool): Whether to display warning messages.
  • silent (bool): Flag to suppress all output.
  • start_method (Optional[str]): Method to use for starting subprocesses.
  • strict (Optional[bool]): Whether to enable strict mode for validation and error checking.
  • summary_timeout (int): Time in seconds to wait for summary operations before timing out.
  • summary_warnings (int): Maximum number of summary warnings to display.
  • sweep_id (Optional[str]): Identifier of the sweep this run belongs to.
  • sweep_param_path (Optional[str]): Path to the sweep parameters configuration.
  • symlink (bool): Whether to use symlinks for run directories.
  • sync_tensorboard (Optional[bool]): Whether to synchronize TensorBoard logs with W&B.
  • table_raise_on_max_row_limit_exceeded (bool): Whether to raise an exception when table row limits are exceeded.
  • username (Optional[str]): Username of the user.

property Settings.colab_url

The URL to the Colab notebook, if running in Colab.


property Settings.deployment


property Settings.files_dir

Absolute path to the local directory where the run’s files are stored.


property Settings.is_local


property Settings.log_dir

The directory for storing log files.


property Settings.log_internal

The path to the file to use for internal logs.


property Settings.log_symlink_internal

The path to the symlink to the internal log file of the most recent run.


property Settings.log_symlink_user

The path to the symlink to the user-process log file of the most recent run.


property Settings.log_user

The path to the file to use for user-process logs.


property Settings.model_extra

Get extra fields set during validation.

Returns: A dictionary of extra fields, or None if config.extra is not set to "allow".


property Settings.model_fields_set

Returns the set of fields that have been explicitly set on this model instance.

Returns: A set of strings representing the fields that have been set, i.e. that were not filled from defaults.


property Settings.project_url

The W&B URL where the project can be viewed.


property Settings.resume_fname

The path to the resume file.


property Settings.run_mode

The mode of the run. Can be either “run” or “offline-run”.


property Settings.run_url

The W&B URL where the run can be viewed.


property Settings.settings_workspace

The path to the workspace settings file.


property Settings.sweep_url

The W&B URL where the sweep can be viewed.


property Settings.sync_dir

The directory for storing the run’s files.


property Settings.sync_file

Path to the append-only binary transaction log file.


property Settings.sync_symlink_latest

Path to the symlink to the most recent run’s transaction log file.


property Settings.timespec

The time specification for the run.


property Settings.wandb_dir

Full path to the wandb directory.


classmethod Settings.catch_private_settings

catch_private_settings(values)

Check if a private field is provided and assign to the corresponding public one.

This is a compatibility layer to handle previous versions of the settings.


method Settings.update_from_dict

update_from_dict(settings: 'Dict[str, Any]') → None

Update settings from a dictionary.


5.4.1.2 - Functions

5.4.1.2.1 - agent()

function agent

agent(
    sweep_id: str,
    function: Optional[Callable] = None,
    entity: Optional[str] = None,
    project: Optional[str] = None,
    count: Optional[int] = None
) → None

Start one or more sweep agents.

The sweep agent uses the sweep_id to know which sweep it is a part of, what function to execute, and (optionally) how many agents to run.

Args:

  • sweep_id: The unique identifier for a sweep. A sweep ID is generated by W&B CLI or Python SDK.
  • function: A function to call instead of the “program” specified in the sweep config.
  • entity: The username or team name where you want to send W&B runs created by the sweep to. Ensure that the entity you specify already exists. If you don’t specify an entity, the run will be sent to your default entity, which is usually your username.
  • project: The name of the project where W&B runs created from the sweep are sent to. If the project is not specified, the run is sent to a project labeled “Uncategorized”.
  • count: The number of sweep config trials to try.
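
As a hedged sketch of the function argument: the toy objective below is plain Python (objective, train, and sweep_id are illustrative names, not part of the wandb API), and the commented lines show how it might be handed to an agent.

```python
def objective(params):
    # Toy objective for illustration: lower "loss" with more epochs.
    return 1.0 / (1 + params["epochs"]) + params["learning_rate"]


# With a sweep already created, the function passed to wandb.agent()
# would typically read hyperparameters from run.config and log a metric:
#
# def train():
#     with wandb.init() as run:
#         run.log({"loss": objective(dict(run.config))})
#
# wandb.agent(sweep_id, function=train, count=5)
```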

5.4.1.2.2 - controller()

function controller

controller(
    sweep_id_or_config: Optional[Union[str, Dict]] = None,
    entity: Optional[str] = None,
    project: Optional[str] = None
) → _WandbController

Public sweep controller constructor.

Examples:

import wandb

tuner = wandb.controller(...)
print(tuner.sweep_config)
print(tuner.sweep_id)
tuner.configure_search(...)
tuner.configure_stopping(...)

5.4.1.2.3 - finish()

function finish

finish(exit_code: 'int | None' = None, quiet: 'bool | None' = None) → None

Finish a run and upload any remaining data.

Marks the completion of a W&B run and ensures all data is synced to the server. The run’s final state is determined by its exit conditions and sync status.

Run States:

  • Running: Active run that is logging data and/or sending heartbeats.
  • Crashed: Run that stopped sending heartbeats unexpectedly.
  • Finished: Run completed successfully (exit_code=0) with all data synced.
  • Failed: Run completed with errors (exit_code!=0).

Args:

  • exit_code: Integer indicating the run’s exit status. Use 0 for success, any other value marks the run as failed.
  • quiet: Deprecated. Configure logging verbosity using wandb.Settings(quiet=...).

5.4.1.2.4 - init()

function init

init(
    entity: 'str | None' = None,
    project: 'str | None' = None,
    dir: 'StrPath | None' = None,
    id: 'str | None' = None,
    name: 'str | None' = None,
    notes: 'str | None' = None,
    tags: 'Sequence[str] | None' = None,
    config: 'dict[str, Any] | str | None' = None,
    config_exclude_keys: 'list[str] | None' = None,
    config_include_keys: 'list[str] | None' = None,
    allow_val_change: 'bool | None' = None,
    group: 'str | None' = None,
    job_type: 'str | None' = None,
    mode: "Literal['online', 'offline', 'disabled'] | None" = None,
    force: 'bool | None' = None,
    anonymous: "Literal['never', 'allow', 'must'] | None" = None,
    reinit: "bool | Literal[None, 'default', 'return_previous', 'finish_previous', 'create_new']" = None,
    resume: "bool | Literal['allow', 'never', 'must', 'auto'] | None" = None,
    resume_from: 'str | None' = None,
    fork_from: 'str | None' = None,
    save_code: 'bool | None' = None,
    tensorboard: 'bool | None' = None,
    sync_tensorboard: 'bool | None' = None,
    monitor_gym: 'bool | None' = None,
    settings: 'Settings | dict[str, Any] | None' = None
) → Run

Start a new run to track and log to W&B.

In an ML training pipeline, you could add wandb.init() to the beginning of your training script as well as your evaluation script, and each piece would be tracked as a run in W&B.

wandb.init() spawns a new background process to log data to a run, and it also syncs data to https://wandb.ai by default, so you can see your results in real-time. When you’re done logging data, call wandb.finish() to end the run. If you don’t call run.finish(), the run will end when your script exits.

Run IDs must not contain any of the following special characters / \ # ? % :
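
The run ID rule above can be checked before calling wandb.init(); the helper below is a hypothetical sketch, not part of the wandb API.

```python
# Characters that run IDs must not contain, per the rule above.
FORBIDDEN_ID_CHARS = set("/\\#?%:")


def is_valid_run_id(run_id: str) -> bool:
    """Return True if run_id contains none of the forbidden characters."""
    return not (set(run_id) & FORBIDDEN_ID_CHARS)
```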

Args:

  • entity: The username or team name the runs are logged to. The entity must already exist, so ensure you create your account or team in the UI before starting to log runs. If not specified, the run will be sent to your default entity. To change the default entity, go to your settings and update the “Default location to create new projects” under “Default team”.
  • project: The name of the project under which this run will be logged. If not specified, we use a heuristic to infer the project name based on the system, such as checking the git root or the current program file. If we can’t infer the project name, the project will default to "uncategorized".
  • dir: The absolute path to the directory where experiment logs and metadata files are stored. If not specified, this defaults to the ./wandb directory. Note that this does not affect the location where artifacts are stored when calling download().
  • id: A unique identifier for this run, used for resuming. It must be unique within the project and cannot be reused once a run is deleted. For a short descriptive name, use the name field, or for saving hyperparameters to compare across runs, use config.
  • name: A short display name for this run, which appears in the UI to help you identify it. By default, we generate a random two-word name, making it easy to cross-reference runs between tables and charts. Keeping these run names brief enhances readability in chart legends and tables. For saving hyperparameters, we recommend using the config field.
  • notes: A detailed description of the run, similar to a commit message in Git. Use this argument to capture any context or details that may help you recall the purpose or setup of this run in the future.
  • tags: A list of tags to label this run in the UI. Tags are helpful for organizing runs or adding temporary identifiers like “baseline” or “production.” You can easily add, remove tags, or filter by tags in the UI. If resuming a run, the tags provided here will replace any existing tags. To add tags to a resumed run without overwriting the current tags, use run.tags += ["new_tag"] after calling run = wandb.init().
  • config: Sets wandb.config, a dictionary-like object for storing input parameters to your run, such as model hyperparameters or data preprocessing settings. The config appears in the UI in an overview page, allowing you to group, filter, and sort runs based on these parameters. Keys should not contain periods (.), and values should be smaller than 10 MB. If a dictionary, argparse.Namespace, or absl.flags.FLAGS is provided, the key-value pairs will be loaded directly into wandb.config. If a string is provided, it is interpreted as a path to a YAML file, from which configuration values will be loaded into wandb.config.
  • config_exclude_keys: A list of specific keys to exclude from wandb.config.
  • config_include_keys: A list of specific keys to include in wandb.config.
  • allow_val_change: Controls whether config values can be modified after their initial set. By default, an exception is raised if a config value is overwritten. For tracking variables that change during training, such as a learning rate, consider using wandb.log() instead. By default, this is False in scripts and True in Notebook environments.
  • group: Specify a group name to organize individual runs as part of a larger experiment. This is useful for cases like cross-validation or running multiple jobs that train and evaluate a model on different test sets. Grouping allows you to manage related runs collectively in the UI, making it easy to toggle and review results as a unified experiment.
  • job_type: Specify the type of run, especially helpful when organizing runs within a group as part of a larger experiment. For example, in a group, you might label runs with job types such as “train” and “eval”. Defining job types enables you to easily filter and group similar runs in the UI, facilitating direct comparisons.
  • mode: Specifies how run data is managed, with the following options:
    • "online" (default): Enables live syncing with W&B when a network connection is available, with real-time updates to visualizations.
    • "offline": Suitable for air-gapped or offline environments; data is saved locally and can be synced later. Ensure the run folder is preserved to enable future syncing.
    • "disabled": Disables all W&B functionality, making the run’s methods no-ops. Typically used in testing to bypass W&B operations.
  • force: Determines if a W&B login is required to run the script. If True, the user must be logged in to W&B; otherwise, the script will not proceed. If False (default), the script can proceed without a login, switching to offline mode if the user is not logged in.
  • anonymous: Specifies the level of control over anonymous data logging. Available options are:
    • "never" (default): Requires you to link your W&B account before tracking the run. This prevents unintentional creation of anonymous runs by ensuring each run is associated with an account.
    • "allow": Enables a logged-in user to track runs with their account, but also allows someone running the script without a W&B account to view the charts and data in the UI.
    • "must": Forces the run to be logged to an anonymous account, even if the user is logged in.
  • reinit: Shorthand for the “reinit” setting. Determines the behavior of wandb.init() when a run is active.
  • resume: Controls the behavior when resuming a run with the specified id. Available options are:
    • "allow": If a run with the specified id exists, it will resume from the last step; otherwise, a new run will be created.
    • "never": If a run with the specified id exists, an error will be raised. If no such run is found, a new run will be created.
    • "must": If a run with the specified id exists, it will resume from the last step. If no run is found, an error will be raised.
    • "auto": Automatically resumes the previous run if it crashed on this machine; otherwise, starts a new run.
    • True: Deprecated. Use "auto" instead.
    • False: Deprecated. Use the default behavior (leaving resume unset) to always start a new run. If resume is set, fork_from and resume_from cannot be used. When resume is unset, the system will always start a new run.
  • resume_from: Specifies a moment in a previous run to resume a run from, using the format {run_id}?_step={step}. This allows users to truncate the history logged to a run at an intermediate step and resume logging from that step. The target run must be in the same project. If an id argument is also provided, the resume_from argument will take precedence. resume, resume_from and fork_from cannot be used together, only one of them can be used at a time. Note that this feature is in beta and may change in the future.
  • fork_from: Specifies a point in a previous run from which to fork a new run, using the format {id}?_step={step}. This creates a new run that resumes logging from the specified step in the target run’s history. The target run must be part of the current project. If an id argument is also provided, it must be different from the fork_from argument, an error will be raised if they are the same. resume, resume_from and fork_from cannot be used together, only one of them can be used at a time. Note that this feature is in beta and may change in the future.
  • save_code: Enables saving the main script or notebook to W&B, aiding in experiment reproducibility and allowing code comparisons across runs in the UI. By default, this is disabled, but you can change the default to enable on your settings page.
  • tensorboard: Deprecated. Use sync_tensorboard instead.
  • sync_tensorboard: Enables automatic syncing of W&B logs from TensorBoard or TensorBoardX, saving relevant event files for viewing in the W&B UI. (Default: False)
  • monitor_gym: Enables automatic logging of videos of the environment when using OpenAI Gym.
  • settings: Specifies a dictionary or wandb.Settings object with advanced settings for the run.
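
The resume_from and fork_from arguments share the {run_id}?_step={step} format; a small hypothetical formatter (not a wandb helper) makes the shape explicit.

```python
def fork_point(run_id: str, step: int) -> str:
    """Format a resume_from/fork_from value: '{run_id}?_step={step}'."""
    return f"{run_id}?_step={step}"


# Hypothetical usage (project name and run ID are placeholders):
# wandb.init(project="my_project", fork_from=fork_point("abc123", 200))
```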

Raises:

  • Error: if some unknown or internal error happened during the run initialization.
  • AuthenticationError: if the user failed to provide valid credentials.
  • CommError: if there was a problem communicating with the WandB server.
  • UsageError: if the user provided invalid arguments.
  • KeyboardInterrupt: if user interrupts the run.

Returns: A Run object.

Examples: wandb.init() returns a run object, and you can also access the run object with wandb.run:

import wandb

config = {"lr": 0.01, "batch_size": 32}
with wandb.init(config=config) as run:
    run.config.update({"architecture": "resnet", "depth": 34})

    # ... your training code here ...

5.4.1.2.5 - login()

function login

login(
    anonymous: Optional[Literal['must', 'allow', 'never']] = None,
    key: Optional[str] = None,
    relogin: Optional[bool] = None,
    host: Optional[str] = None,
    force: Optional[bool] = None,
    timeout: Optional[int] = None,
    verify: bool = False,
    referrer: Optional[str] = None
) → bool

Set up W&B login credentials.

By default, this will only store credentials locally without verifying them with the W&B server. To verify credentials, pass verify=True.

Args:

  • anonymous: Set to “must”, “allow”, or “never”. If set to “must”, always log a user in anonymously. If set to “allow”, only create an anonymous user if the user isn’t already logged in. If set to “never”, never log a user anonymously. Default set to “never”.
  • key: The API key to use.
  • relogin: If true, will re-prompt for API key.
  • host: The host to connect to.
  • force: If true, will force a relogin.
  • timeout: Number of seconds to wait for user input.
  • verify: Verify the credentials with the W&B server.
  • referrer: The referrer to use in the URL login request.

Returns:

  • bool: True if the API key is configured.

Raises:

  • AuthenticationError: If api_key fails verification with the server.
  • UsageError: If api_key cannot be configured and no tty.

5.4.1.2.6 - restore()

function restore

restore(
    name: 'str',
    run_path: 'str | None' = None,
    replace: 'bool' = False,
    root: 'str | None' = None
) → None | TextIO

Download the specified file from cloud storage.

File is placed into the current directory or run directory. By default, will only download the file if it doesn’t already exist.

Args:

  • name: The name of the file.
  • run_path: Optional path to a run to pull files from, for example username/project_name/run_id. This is required if wandb.init has not been called.
  • replace: Whether to download the file even if it already exists locally.
  • root: The directory to download the file to. Defaults to the current directory or the run directory if wandb.init was called.

Returns: None if it can’t find the file, otherwise a file object open for reading.

Raises:

  • wandb.CommError: If W&B can’t connect to the W&B backend.
  • ValueError: If the file is not found or run_path cannot be found.

5.4.1.2.7 - setup()

function setup

setup(settings: 'Settings | None' = None) → _WandbSetup

Prepares W&B for use in the current process and its children.

You can usually ignore this as it is implicitly called by wandb.init().

When using wandb in multiple processes, calling wandb.setup() in the parent process before starting child processes may improve performance and resource utilization.

Note that wandb.setup() modifies os.environ, and it is important that child processes inherit the modified environment variables.

See also wandb.teardown().

Args:

  • settings: Configuration settings to apply globally. These can be overridden by subsequent wandb.init() calls.

Example:

import multiprocessing

import wandb


def run_experiment(params):
    with wandb.init(config=params):
        # Run experiment
        pass


if __name__ == "__main__":
    # Start backend and set global config
    wandb.setup(settings={"project": "my_project"})

    # Define experiment parameters
    experiment_params = [
        {"learning_rate": 0.01, "epochs": 10},
        {"learning_rate": 0.001, "epochs": 20},
    ]

    # Start multiple processes, each running a separate experiment
    processes = []
    for params in experiment_params:
        p = multiprocessing.Process(target=run_experiment, args=(params,))
        p.start()
        processes.append(p)

    # Wait for all processes to complete
    for p in processes:
        p.join()

    # Optional: Explicitly shut down the backend
    wandb.teardown()

5.4.1.2.8 - sweep()

function sweep

sweep(
    sweep: Union[dict, Callable],
    entity: Optional[str] = None,
    project: Optional[str] = None,
    prior_runs: Optional[List[str]] = None
) → str

Initialize a hyperparameter sweep.

Searches for hyperparameters that optimize a cost function of a machine learning model by testing various combinations.

Make note of the unique identifier, sweep_id, that is returned. At a later step, provide the sweep_id to a sweep agent.

See Sweep configuration structure for information on how to define your sweep.

Args:

  • sweep: The configuration of a hyperparameter search (or a configuration generator). If you provide a callable, ensure that the callable does not take arguments and that it returns a dictionary that conforms to the W&B sweep config spec.
  • entity: The username or team name where you want to send W&B runs created by the sweep to. Ensure that the entity you specify already exists. If you don’t specify an entity, the run will be sent to your default entity, which is usually your username.
  • project: The name of the project where W&B runs created from the sweep are sent to. If the project is not specified, the run is sent to a project labeled ‘Uncategorized’.
  • prior_runs: The run IDs of existing runs to add to this sweep.

Returns:

  • sweep_id: str. A unique identifier for the sweep.
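
A minimal sketch of the two-step flow, assuming a random-search sweep: the configuration dict below follows the documented sweep config spec, while the project name and train function in the comments are placeholders.

```python
# A minimal sweep configuration: random search minimizing "loss".
sweep_config = {
    "method": "random",
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"values": [0.01, 0.001]},
        "epochs": {"min": 5, "max": 20},
    },
}

# With wandb available, the sweep would be created and run like so:
# sweep_id = wandb.sweep(sweep_config, project="my_project")
# wandb.agent(sweep_id, function=train)
```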

5.4.1.2.9 - teardown()

function teardown

teardown(exit_code: 'int | None' = None) → None

Waits for W&B to finish and frees resources.

Completes any runs that were not explicitly finished using run.finish() and waits for all data to be uploaded.

It is recommended to call this at the end of a session that used wandb.setup(). It is invoked automatically in an atexit hook, but this is not reliable in certain setups such as when using Python’s multiprocessing module.

5.4.1.3 - Legacy Functions

5.4.1.3.1 - define_metric()

function wandb.define_metric

wandb.define_metric(
    name: 'str',
    step_metric: 'str | wandb_metric.Metric | None' = None,
    step_sync: 'bool | None' = None,
    hidden: 'bool | None' = None,
    summary: 'str | None' = None,
    goal: 'str | None' = None,
    overwrite: 'bool | None' = None
) → wandb_metric.Metric

Customize metrics logged with wandb.log().

Args:

  • name: The name of the metric to customize.
  • step_metric: The name of another metric to serve as the X-axis for this metric in automatically generated charts.
  • step_sync: Automatically insert the last value of step_metric into run.log() if it is not provided explicitly. Defaults to True if step_metric is specified.
  • hidden: Hide this metric from automatic plots.
  • summary: Specify aggregate metrics added to summary. Supported aggregations include “min”, “max”, “mean”, “last”, “best”, “copy” and “none”. “best” is used together with the goal parameter. “none” prevents a summary from being generated. “copy” is deprecated and should not be used.
  • goal: Specify how to interpret the “best” summary type. Supported options are “minimize” and “maximize”.
  • overwrite: If false, then this call is merged with previous define_metric calls for the same metric by using their values for any unspecified parameters. If true, then unspecified parameters overwrite values specified by previous calls.
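
To make the summary aggregations concrete, the sketch below reimplements them in plain Python; it is an illustration of the documented behavior, not wandb source code.

```python
def summarize(values, mode):
    """Apply a define_metric-style summary aggregation to logged values."""
    if mode == "min":
        return min(values)
    if mode == "max":
        return max(values)
    if mode == "mean":
        return sum(values) / len(values)
    if mode == "last":
        return values[-1]
    raise ValueError(f"unsupported summary: {mode}")
```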

Returns: An object that represents this call but can otherwise be discarded.

5.4.1.3.2 - link_model()

wandb.link_model(
    path: 'StrPath',
    registered_model_name: 'str',
    name: 'str | None' = None,
    aliases: 'list[str] | None' = None
) → Artifact | None

Log a model artifact version and link it to a registered model in the model registry.

Linked model versions are visible in the UI for the specified registered model.

This method will:

  • Check if the 'name' model artifact has been logged. If so, use the artifact version that matches the files located at 'path', or log a new version. Otherwise, log the files under 'path' as a new model artifact, 'name', of type 'model'.
  • Check if a registered model with the name 'registered_model_name' exists in the 'model-registry' project. If not, create a new registered model with the name 'registered_model_name'.
  • Link the version of model artifact 'name' to the registered model, 'registered_model_name'.
  • Attach the aliases from the 'aliases' list to the newly linked model artifact version.

Args:

  • path: (str) A path to the contents of this model, can be in the following forms:
    • /local/directory
    • /local/directory/file.txt
    • s3://bucket/path
  • registered_model_name: The name of the registered model that the model is to be linked to. A registered model is a collection of model versions linked to the model registry, typically representing a team’s specific ML Task. The entity that this registered model belongs to will be derived from the run.
  • name: The name of the model artifact that files in ‘path’ will be logged to. This will default to the basename of the path prepended with the current run id if not specified.
  • aliases: Aliases that will only be applied on this linked artifact inside the registered model. The alias “latest” will always be applied to the latest version of an artifact that is linked.

Raises:

  • AssertionError: If registered_model_name is a path or if model artifact 'name' is of a type that does not contain the substring 'model'.
  • ValueError: If name has invalid special characters.

Returns: The linked artifact if linking was successful, otherwise None.

Examples:

run.link_model(
   path="/local/directory",
   registered_model_name="my_reg_model",
   name="my_model_artifact",
   aliases=["production"],
)

Invalid usage

run.link_model(
    path="/local/directory",
    registered_model_name="my_entity/my_project/my_reg_model",
    name="my_model_artifact",
    aliases=["production"],
)

run.link_model(
    path="/local/directory",
    registered_model_name="my_reg_model",
    name="my_entity/my_project/my_model_artifact",
    aliases=["production"],
)

5.4.1.3.3 - log_artifact()

function wandb.log_artifact

wandb.log_artifact(
    artifact_or_path: 'Artifact | StrPath',
    name: 'str | None' = None,
    type: 'str | None' = None,
    aliases: 'list[str] | None' = None,
    tags: 'list[str] | None' = None
) → Artifact

Declare an artifact as an output of a run.

Args:

  • artifact_or_path: A path to the contents of this artifact, can be in the following forms
    • /local/directory
    • /local/directory/file.txt
    • s3://bucket/path
  • name: An artifact name. Defaults to the basename of the path prepended with the current run id if not specified. Valid names can be in the following forms:
    • name:version
    • name:alias
    • digest
  • type: The type of artifact to log. Common examples include dataset and model
  • aliases: Aliases to apply to this artifact, defaults to ["latest"]
  • tags: Tags to apply to this artifact, if any.

Returns: An Artifact object.

5.4.1.3.4 - log_model()

function wandb.log_model

wandb.log_model(
    path: 'StrPath',
    name: 'str | None' = None,
    aliases: 'list[str] | None' = None
) → None

Logs a model artifact as an output of this run.

The name of model artifact can only contain alphanumeric characters, underscores, and hyphens.

Args:

  • path: A path to the contents of this model, can be in the following forms
    • /local/directory
    • /local/directory/file.txt
    • s3://bucket/path
  • name: A name to assign to the model artifact that the file contents will be added to. The string may only contain alphanumeric characters, dashes, underscores, and dots. This will default to the basename of the path prepended with the current run id if not specified.
  • aliases: Aliases to apply to the created model artifact, defaults to ["latest"]

Returns: None

Raises:

  • ValueError: if name has invalid special characters.

Examples:

run.log_model(
   path="/local/directory",
   name="my_model_artifact",
   aliases=["production"],
)

Invalid usage

run.log_model(
    path="/local/directory",
    name="my_entity/my_project/my_model_artifact",
    aliases=["production"],
)

5.4.1.3.5 - log()

function wandb.log

wandb.log(
    data: 'dict[str, Any]',
    step: 'int | None' = None,
    commit: 'bool | None' = None
) → None

Upload run data.

Use log to log data from runs, such as scalars, images, video, histograms, plots, and tables. See Log objects and media for code snippets, best practices, and more.

Basic usage:

import wandb

with wandb.init() as run:
     run.log({"train-loss": 0.5, "accuracy": 0.9})

The previous code snippet saves the loss and accuracy to the run’s history and updates the summary values for these metrics.

Visualize logged data in a workspace at wandb.ai, or locally on a self-hosted instance of the W&B app, or export data to visualize and explore locally, such as in a Jupyter notebook, with the Public API.

Logged values don’t have to be scalars. You can log any W&B supported Data Type such as images, audio, video, and more. For example, you can use wandb.Table to log structured data. See Log tables, visualize and query data tutorial for more details.

W&B organizes metrics with a forward slash (/) in their name into sections named using the text before the final slash. For example, the following results in two sections named “train” and “validate”:

run.log(
     {
         "train/accuracy": 0.9,
         "train/loss": 30,
         "validate/accuracy": 0.8,
         "validate/loss": 20,
     }
)

Only one level of nesting is supported; run.log({"a/b/c": 1}) produces a section named “a/b”.
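
The sectioning rule can be stated in one line of plain Python; section_of is a hypothetical helper mirroring the behavior described above, returning an empty string for keys without a slash (which land in the default section).

```python
def section_of(key: str) -> str:
    """Section name for a metric key: the text before the final slash."""
    return key.rsplit("/", 1)[0] if "/" in key else ""
```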

run.log is not intended to be called more than a few times per second. For optimal performance, limit your logging to once every N iterations, or collect data over multiple iterations and log it in a single step.
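
The "collect data over multiple iterations" advice can be sketched as a small buffer; Accumulator is a hypothetical helper, and log_fn stands in for run.log when wandb is available.

```python
class Accumulator:
    """Buffer metrics and flush them in a single log call every N iterations."""

    def __init__(self, log_fn, every=100):
        self.log_fn = log_fn  # e.g. run.log
        self.every = every
        self.buffer = {}
        self.count = 0

    def add(self, metrics):
        self.buffer.update(metrics)
        self.count += 1
        if self.count % self.every == 0:
            self.log_fn(dict(self.buffer))  # one step instead of N
            self.buffer.clear()
```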

By default, each call to log creates a new “step”. The step must always increase, and it is not possible to log to a previous step. You can use any metric as the X axis in charts. See Custom log axes for more details.

In many cases, it is better to treat the W&B step like you’d treat a timestamp rather than a training step.

# Example: log an "epoch" metric for use as an X axis.
run.log({"epoch": 40, "train-loss": 0.5})

It is possible to use multiple log invocations to log to the same step with the step and commit parameters. The following are all equivalent:

# Normal usage:
run.log({"train-loss": 0.5, "accuracy": 0.8})
run.log({"train-loss": 0.4, "accuracy": 0.9})

# Implicit step without auto-incrementing:
run.log({"train-loss": 0.5}, commit=False)
run.log({"accuracy": 0.8})
run.log({"train-loss": 0.4}, commit=False)
run.log({"accuracy": 0.9})

# Explicit step:
run.log({"train-loss": 0.5}, step=current_step)
run.log({"accuracy": 0.8}, step=current_step)
current_step += 1
run.log({"train-loss": 0.4}, step=current_step)
run.log({"accuracy": 0.9}, step=current_step)

Args:

  • data: A dict with str keys and values that are serializable Python objects, including: int, float and string; any of the wandb.data_types; lists, tuples and NumPy arrays of serializable Python objects; other dicts of this structure.
  • step: The step number to log. If None, then an implicit auto-incrementing step is used. See the notes in the description.
  • commit: If true, finalize and upload the step. If false, then accumulate data for the step. See the notes in the description. If step is None, then the default is commit=True; otherwise, the default is commit=False.
  • sync: This argument is deprecated and does nothing.
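
The interaction between step and the default for commit can be summarized in one function; this is an illustrative restatement of the rule above, not wandb source code.

```python
def effective_commit(step, commit):
    """Default documented above: commit=True when step is None, else False."""
    if commit is not None:
        return commit
    return step is None
```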

Examples: For more examples, and more detailed ones, see our guides to logging.

Basic usage

import wandb

run = wandb.init()
run.log({"accuracy": 0.9, "epoch": 5})

Incremental logging

import wandb

run = wandb.init()
run.log({"loss": 0.2}, commit=False)
# Somewhere else when I'm ready to report this step:
run.log({"accuracy": 0.8})

Histogram

import numpy as np
import wandb

# sample gradients at random from normal distribution
gradients = np.random.randn(100, 100)
run = wandb.init()
run.log({"gradients": wandb.Histogram(gradients)})

Image from NumPy

import numpy as np
import wandb

run = wandb.init()
examples = []
for i in range(3):
    pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
    image = wandb.Image(pixels, caption=f"random field {i}")
    examples.append(image)
run.log({"examples": examples})

Image from PIL

import numpy as np
from PIL import Image as PILImage
import wandb

run = wandb.init()
examples = []
for i in range(3):
    pixels = np.random.randint(
         low=0,
         high=256,
         size=(100, 100, 3),
         dtype=np.uint8,
    )
    pil_image = PILImage.fromarray(pixels, mode="RGB")
    image = wandb.Image(pil_image, caption=f"random field {i}")
    examples.append(image)
run.log({"examples": examples})

Video from NumPy

import numpy as np
import wandb

run = wandb.init()
# axes are (time, channel, height, width)
frames = np.random.randint(
    low=0,
    high=256,
    size=(10, 3, 100, 100),
    dtype=np.uint8,
)
run.log({"video": wandb.Video(frames, fps=4)})

Matplotlib plot

from matplotlib import pyplot as plt
import numpy as np
import wandb

run = wandb.init()
fig, ax = plt.subplots()
x = np.linspace(0, 10)
y = x * x
ax.plot(x, y)  # plot y = x^2
run.log({"chart": fig})

PR Curve

import wandb

run = wandb.init()
run.log({"pr": wandb.plot.pr_curve(y_test, y_probas, labels)})

3D Object

import wandb

run = wandb.init()
run.log(
    {
         "generated_samples": [
             wandb.Object3D(open("sample.obj")),
             wandb.Object3D(open("sample.gltf")),
             wandb.Object3D(open("sample.glb")),
         ]
    }
)

Raises:

  • wandb.Error: if called before wandb.init
  • ValueError: if invalid data is passed

Examples:

# Basic usage
import wandb

run = wandb.init()
run.log({"accuracy": 0.9, "epoch": 5})
# Incremental logging
import wandb

run = wandb.init()
run.log({"loss": 0.2}, commit=False)
# Somewhere else when I'm ready to report this step:
run.log({"accuracy": 0.8})
# Histogram
import numpy as np
import wandb

# sample gradients at random from normal distribution
gradients = np.random.randn(100, 100)
run = wandb.init()
run.log({"gradients": wandb.Histogram(gradients)})
# Image from numpy
import numpy as np
import wandb

run = wandb.init()
examples = []
for i in range(3):
    pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
    image = wandb.Image(pixels, caption=f"random field {i}")
    examples.append(image)
run.log({"examples": examples})
# Image from PIL
import numpy as np
from PIL import Image as PILImage
import wandb

run = wandb.init()
examples = []
for i in range(3):
    pixels = np.random.randint(
         low=0, high=256, size=(100, 100, 3), dtype=np.uint8
    )
    pil_image = PILImage.fromarray(pixels, mode="RGB")
    image = wandb.Image(pil_image, caption=f"random field {i}")
    examples.append(image)
run.log({"examples": examples})

For more detailed examples, see our guides to logging.

5.4.1.3.6 - save()

function wandb.save

wandb.save(
    glob_str: 'str | os.PathLike',
    base_path: 'str | os.PathLike | None' = None,
    policy: 'PolicyName' = 'live'
)  bool | list[str]

Sync one or more files to W&B.

Relative paths are relative to the current working directory.

A Unix glob, such as “myfiles/*”, is expanded at the time save is called regardless of the policy. In particular, new files are not picked up automatically.

A base_path may be provided to control the directory structure of uploaded files. It should be a prefix of glob_str, and the directory structure beneath it is preserved.

When given an absolute path or glob and no base_path, one directory level is preserved, as in the examples below.

Args:

  • glob_str: A relative or absolute path or Unix glob.
  • base_path: A path to use to infer a directory structure; see examples.
  • policy: One of live, now, or end.
    • live: upload the file as it changes, overwriting the previous version
    • now: upload the file once now
    • end: upload file when the run ends

Returns: Paths to the symlinks created for the matched files.

For historical reasons, this may return a boolean in legacy code.

import wandb

wandb.init()

wandb.save("these/are/myfiles/*")
# => Saves files in a "these/are/myfiles/" folder in the run.

wandb.save("these/are/myfiles/*", base_path="these")
# => Saves files in an "are/myfiles/" folder in the run.

wandb.save("/User/username/Documents/run123/*.txt")
# => Saves files in a "run123/" folder in the run. See note below.

wandb.save("/User/username/Documents/run123/*.txt", base_path="/User")
# => Saves files in a "username/Documents/run123/" folder in the run.

wandb.save("files/*/saveme.txt")
# => Saves each "saveme.txt" file in an appropriate subdirectory
#    of "files/".

5.4.1.3.7 - unwatch()

function wandb.unwatch

wandb.unwatch(
    models: 'torch.nn.Module | Sequence[torch.nn.Module] | None' = None
)  None

Remove pytorch model topology, gradient and parameter hooks.

Args:

  • models: Optional list of pytorch models that have had watch called on them.

5.4.1.3.8 - use_artifact()

function wandb.use_artifact

wandb.use_artifact(
    artifact_or_name: 'str | Artifact',
    type: 'str | None' = None,
    aliases: 'list[str] | None' = None,
    use_as: 'str | None' = None
)  Artifact

Declare an artifact as an input to a run.

Call download or file on the returned object to get the contents locally.

Args:

  • artifact_or_name: The name of the artifact to use. May be prefixed with the name of the project the artifact was logged to ("<project>/" or "<entity>/<project>/"). If no entity is specified in the name, the Run or API setting’s entity is used. Valid names can be in the following forms
    • name:version
    • name:alias
  • type: The type of artifact to use.
  • aliases: Aliases to apply to this artifact
  • use_as: This argument is deprecated and does nothing.

Returns: An Artifact object.

Examples:

import wandb

run = wandb.init(project="<example>")

# Use an artifact by name and alias
artifact_a = run.use_artifact(artifact_or_name="<name>:<alias>")

# Use an artifact by name and version
artifact_b = run.use_artifact(artifact_or_name="<name>:v<version>")

# Use an artifact by entity/project/name:alias
artifact_c = run.use_artifact(
   artifact_or_name="<entity>/<project>/<name>:<alias>"
)

# Use an artifact by entity/project/name:version
artifact_d = run.use_artifact(
   artifact_or_name="<entity>/<project>/<name>:v<version>"
)

5.4.1.3.9 - use_model()

function wandb.use_model

wandb.use_model(name: 'str')  FilePathStr

Download the files logged in a model artifact 'name'.

Args:

  • name: A model artifact name. 'name' must match the name of an existing logged model artifact. May be prefixed with entity/project/. Valid names can be in the following forms
    • model_artifact_name:version
    • model_artifact_name:alias

Raises:

  • AssertionError: if model artifact name is of a type that does not contain the substring ‘model’.

Returns:

  • path: path to downloaded model artifact file(s).

Examples:

run.use_model(
   name="my_model_artifact:latest",
)

run.use_model(
   name="my_project/my_model_artifact:v0",
)

run.use_model(
   name="my_entity/my_project/my_model_artifact:<digest>",
)

Invalid usage

run.use_model(
    name="my_entity/my_project/my_model_artifact",
)

5.4.1.3.10 - watch()

function wandb.watch

wandb.watch(
    models: 'torch.nn.Module | Sequence[torch.nn.Module]',
    criterion: 'torch.F | None' = None,
    log: "Literal['gradients', 'parameters', 'all'] | None" = 'gradients',
    log_freq: 'int' = 1000,
    idx: 'int | None' = None,
    log_graph: 'bool' = False
)  None

Hook into given PyTorch model to monitor gradients and the model’s computational graph.

This function can track parameters, gradients, or both during training.

Args:

  • models: A single model or a sequence of models to be monitored.
  • criterion: The loss function being optimized (optional).
  • log: Specifies whether to log “gradients”, “parameters”, or “all”. Set to None to disable logging. (default=“gradients”).
  • log_freq: Frequency (in batches) to log gradients and parameters. (default=1000)
  • idx: Index used when tracking multiple models with wandb.watch. (default=None)
  • log_graph: Whether to log the model’s computational graph. (default=False)

Raises: ValueError: If wandb.init has not been called or if any of the models are not instances of torch.nn.Module.

5.4.2 - Data Types

Defines Data Types for logging interactive visualizations to W&B.

5.4.2.1 - Audio

class Audio

W&B class for audio clips.

Attributes:

  • data_or_path (string or numpy array): A path to an audio file or a numpy array of audio data.
  • sample_rate (int): Sample rate, required when passing in raw numpy array of audio data.
  • caption (string): Caption to display with audio.

method Audio.__init__

__init__(
    data_or_path: Union[str, pathlib.Path, list, ForwardRef('np.ndarray')],
    sample_rate: Optional[int] = None,
    caption: Optional[str] = None
)

Accept a path to an audio file or a numpy array of audio data.


5.4.2.2 - box3d()

function box3d

box3d(
    center: 'npt.ArrayLike',
    size: 'npt.ArrayLike',
    orientation: 'npt.ArrayLike',
    color: 'RGBColor',
    label: 'Optional[str]' = None,
    score: 'Optional[numeric]' = None
)  Box3D

Returns a Box3D.

Args:

  • center: The center point of the box as a length-3 ndarray.
  • size: The box’s X, Y and Z dimensions as a length-3 ndarray.
  • orientation: The rotation transforming global XYZ coordinates into the box’s local XYZ coordinates, given as a length-4 ndarray [r, x, y, z] corresponding to the non-zero quaternion r + xi + yj + zk.
  • color: The box’s color as an (r, g, b) tuple with 0 <= r,g,b <= 1.
  • label: An optional label for the box.
  • score: An optional score for the box.

5.4.2.3 - Html

class Html

W&B class for logging HTML content to W&B.

Args:

  • data: HTML to display in wandb
  • inject: Add a stylesheet to the HTML object. If set to False the HTML will pass through unchanged.

method Html.__init__

__init__(
    data: Union[str, pathlib.Path, ForwardRef('TextIO')],
    inject: bool = True,
    data_is_not_path: bool = False
)  None

Creates a W&B HTML object.

It can be initialized by providing a path to a file:

with wandb.init() as run:
     run.log({"html": wandb.Html("./index.html")})

Alternatively, it can be initialized by providing literal HTML, in either a string or IO object:

with wandb.init() as run:
     run.log({"html": wandb.Html("<h1>Hello, world!</h1>")})

Args:

  • data: A string that is a path to a file with the extension “.html”, or a string or IO object containing literal HTML.
  • inject: Add a stylesheet to the HTML object. If set to False the HTML will pass through unchanged.
  • data_is_not_path: If set to False, the data will be treated as a path to a file.

5.4.2.4 - Image

class Image

A class for logging images to W&B.

See https://pillow.readthedocs.io/en/stable/handbook/concepts.html#modes for more information on modes.

Args:

  • data_or_path: Accepts numpy array of image data, or a PIL image. The class attempts to infer the data format and converts it.
  • mode: The PIL mode for an image. Most common are “L”, “RGB”, “RGBA”.
  • caption: Label for display of image.

When logging a torch.Tensor as a wandb.Image, images are normalized. If you do not want to normalize your images, convert your tensors to a PIL Image.

Examples:

# Create a wandb.Image from a numpy array
import numpy as np
import wandb

with wandb.init() as run:
   examples = []
   for i in range(3):
        pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
        image = wandb.Image(pixels, caption=f"random field {i}")
        examples.append(image)
   run.log({"examples": examples})
# Create a wandb.Image from a PILImage
import numpy as np
from PIL import Image as PILImage
import wandb

with wandb.init() as run:
    examples = []
    for i in range(3):
         pixels = np.random.randint(
             low=0, high=256, size=(100, 100, 3), dtype=np.uint8
         )
         pil_image = PILImage.fromarray(pixels, mode="RGB")
         image = wandb.Image(pil_image, caption=f"random field {i}")
         examples.append(image)
    run.log({"examples": examples})
# log .jpg rather than .png (default)
import numpy as np
import wandb

with wandb.init() as run:
    examples = []
    for i in range(3):
         pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
         image = wandb.Image(pixels, caption=f"random field {i}", file_type="jpg")
         examples.append(image)
    run.log({"examples": examples})

method Image.__init__

__init__(
    data_or_path: 'ImageDataOrPathType',
    mode: Optional[str] = None,
    caption: Optional[str] = None,
    grouping: Optional[int] = None,
    classes: Optional[ForwardRef('Classes'), Sequence[dict]] = None,
    boxes: Optional[Dict[str, ForwardRef('BoundingBoxes2D')], Dict[str, dict]] = None,
    masks: Optional[Dict[str, ForwardRef('ImageMask')], Dict[str, dict]] = None,
    file_type: Optional[str] = None,
    normalize: bool = True
)  None

Initialize a wandb.Image object.

Args:

  • data_or_path: Accepts numpy array/pytorch tensor of image data, a PIL image object, or a path to an image file.

If a numpy array or pytorch tensor is provided, the image data will be saved to the given file type. If the values are not in the range [0, 255] or all values are in the range [0, 1], the image pixel values will be normalized to the range [0, 255] unless normalize is set to False.

  • pytorch tensor should be in the format (channel, height, width)
  • numpy array should be in the format (height, width, channel)

  • mode: The PIL mode for an image. Most common are “L”, “RGB”, “RGBA”. Full explanation at https://pillow.readthedocs.io/en/stable/handbook/concepts.html#modes
  • caption: Label for display of image.
  • grouping: The grouping number for the image.
  • classes: A list of class information for the image, used for labeling bounding boxes, and image masks.
  • boxes: A dictionary containing bounding box information for the image. See https://docs.wandb.ai/ref/python/data-types/boundingboxes2d/
  • masks: A dictionary containing mask information for the image. See https://docs.wandb.ai/ref/python/data-types/imagemask/
  • file_type: The file type to save the image as. This parameter has no effect if data_or_path is a path to an image file.
  • normalize: If True, normalize the image pixel values to fall within the range of [0, 255]. Normalize is only applied if data_or_path is a numpy array or pytorch tensor.

Examples:

# Create a wandb.Image from a numpy array
import numpy as np
import wandb

with wandb.init() as run:
    examples = []
    for i in range(3):
        pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
        image = wandb.Image(pixels, caption=f"random field {i}")
        examples.append(image)
    run.log({"examples": examples})

# Create a wandb.Image from a PILImage
import numpy as np
from PIL import Image as PILImage
import wandb

with wandb.init() as run:
    examples = []
    for i in range(3):
        pixels = np.random.randint(
            low=0, high=256, size=(100, 100, 3), dtype=np.uint8
        )
        pil_image = PILImage.fromarray(pixels, mode="RGB")
        image = wandb.Image(pil_image, caption=f"random field {i}")
        examples.append(image)
    run.log({"examples": examples})

# Log .jpg rather than .png (default)
import numpy as np
import wandb

with wandb.init() as run:
    examples = []
    for i in range(3):
        pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
        image = wandb.Image(
            pixels, caption=f"random field {i}", file_type="jpg"
        )
        examples.append(image)
    run.log({"examples": examples})

method Image.guess_mode

guess_mode(
    data: Union[ForwardRef('np.ndarray'), ForwardRef('torch.Tensor')],
    file_type: Optional[str] = None
)  str

Guess what type of image the np.array is representing.


5.4.2.5 - Molecule

class Molecule

W&B class for 3D Molecular data.

Args:

  • data_or_path: (pathlib.Path, string, io) Molecule can be initialized from a file name or an io object.
  • caption: (string) Caption associated with the molecule for display.

method Molecule.__init__

__init__(
    data_or_path: Union[str, pathlib.Path, ForwardRef('TextIO')],
    caption: Optional[str] = None,
    **kwargs: str
)  None

5.4.2.6 - Object3D

class Object3D

W&B class for 3D point clouds.

Args:

  • data_or_path: (numpy array, pathlib.Path, string, io) Object3D can be initialized from a file or a numpy array.

Examples: The shape of the numpy array must be one of either

[[x y z],       ...] nx3
[[x y z c],     ...] nx4 where c is a category with supported range [1, 14]
[[x y z r g b], ...] nx6 where rgb is color

method Object3D.__init__

__init__(
    data_or_path: Union[ForwardRef('np.ndarray'), str, pathlib.Path, ForwardRef('TextIO'), dict],
    caption: Optional[str] = None,
    **kwargs: Optional[str, ForwardRef('FileFormat3D')]
)  None

5.4.2.7 - Plotly

class Plotly

W&B class for Plotly plots.

Args:

  • val: Matplotlib or Plotly figure.

method Plotly.__init__

__init__(
    val: Union[ForwardRef('plotly.Figure'), ForwardRef('matplotlib.artist.Artist')]
)

classmethod Plotly.get_media_subdir

get_media_subdir()  str

classmethod Plotly.make_plot_media

make_plot_media(
    val: Union[ForwardRef('plotly.Figure'), ForwardRef('matplotlib.artist.Artist')]
)  Union[wandb.sdk.data_types.image.Image, ForwardRef('Plotly')]

method Plotly.to_json

to_json(
    run_or_artifact: Union[ForwardRef('LocalRun'), ForwardRef('Artifact')]
)  dict

5.4.2.8 - Table

class Table

The Table class used to display and analyze tabular data.

Unlike traditional spreadsheets, Tables support numerous types of data: scalar values, strings, numpy arrays, and most subclasses of wandb.data_types.Media. This means you can embed Images, Video, Audio, and other sorts of rich, annotated media directly in Tables, alongside other traditional scalar values.

This class is the primary class used to generate the Table Visualizer in the UI: https://docs.wandb.ai/guides/data-vis/tables.

Attributes:

  • columns (List[str]): Names of the columns in the table. Defaults to [“Input”, “Output”, “Expected”].
  • data: (List[List[any]]) 2D row-oriented array of values.
  • dataframe (pandas.DataFrame): DataFrame object used to create the table. When set, data and columns arguments are ignored.
  • optional (Union[bool,List[bool]]): Determines if None values are allowed. Defaults to True. A single bool value applies the same optionality to all columns specified at construction time; a list of bool values applies to each respective column and must be the same length as columns.
  • allow_mixed_types (bool): Determines if columns are allowed to have mixed types (disables type validation). Defaults to False.

method Table.__init__

__init__(
    columns=None,
    data=None,
    rows=None,
    dataframe=None,
    dtype=None,
    optional=True,
    allow_mixed_types=False,
    log_mode: Optional[Literal['IMMUTABLE', 'MUTABLE', 'INCREMENTAL']] = 'IMMUTABLE'
)

Initializes a Table object.

The rows argument is available for legacy reasons and should not be used. The Table class uses data to mimic the Pandas API.

Args:

  • columns: (List[str]) Names of the columns in the table. Defaults to [“Input”, “Output”, “Expected”].
  • data: (List[List[any]]) 2D row-oriented array of values.
  • dataframe: (pandas.DataFrame) DataFrame object used to create the table. When set, data and columns arguments are ignored.
  • optional: (Union[bool,List[bool]]) Determines if None values are allowed. Defaults to True. A single bool value applies the same optionality to all columns specified at construction time; a list of bool values applies to each respective column and must be the same length as columns.
  • allow_mixed_types: (bool) Determines if columns are allowed to have mixed types (disables type validation). Defaults to False
  • log_mode: Optional[str] Controls how the Table is logged when mutations occur. Options: - “IMMUTABLE” (default): Table can only be logged once; subsequent logging attempts after the table has been mutated will be no-ops. - “MUTABLE”: Table can be re-logged after mutations, creating a new artifact version each time it’s logged. - “INCREMENTAL”: Table data is logged incrementally, with each log creating a new artifact entry containing the new data since the last log.

method Table.add_column

add_column(name, data, optional=False)

Adds a column of data to the table.

Args:

  • name: (str) - the unique name of the column
  • data: (list | np.array) - a column of homogeneous data
  • optional: (bool) - if null-like values are permitted

method Table.add_computed_columns

add_computed_columns(fn)

Adds one or more computed columns based on existing data.

Args:

  • fn: A function which accepts one or two parameters, ndx (int) and row (dict), which is expected to return a dict representing new columns for that row, keyed by the new column names.

ndx is an integer representing the index of the row. Only included if include_ndx is set to True.

row is a dictionary keyed by existing columns


method Table.add_data

add_data(*data)

Adds a new row of data to the table.

The maximum number of rows in a table is determined by wandb.Table.MAX_ARTIFACT_ROWS.

The length of the data should match the number of columns in the table.


method Table.add_row

add_row(*row)

Deprecated; use add_data instead.


method Table.cast

cast(col_name, dtype, optional=False)

Casts a column to a specific data type.

This can be one of the normal python classes, an internal W&B type, or an example object, like an instance of wandb.Image or wandb.Classes.

Args:

  • col_name (str): The name of the column to cast.
  • dtype (class, wandb.wandb_sdk.interface._dtypes.Type, any): The target dtype.
  • optional (bool): If the column should allow Nones.

method Table.get_column

get_column(name, convert_to=None)

Retrieves a column from the table and optionally converts it to a NumPy object.

Args:

  • name: (str) - the name of the column
  • convert_to: (str, optional) - “numpy”: will convert the underlying data to numpy object

method Table.get_dataframe

get_dataframe()

Returns a pandas.DataFrame of the table.


method Table.get_index

get_index()

Returns an array of row indexes for use in other tables to create links.


5.4.2.9 - Video

class Video

A class for logging videos to W&B.

Args:

  • data_or_path: Video can be initialized with a path to a file or an io object. The format must be “gif”, “mp4”, “webm” or “ogg”, and must be specified with the format argument. Video can be initialized with a numpy tensor. The numpy tensor must be either 4 dimensional or 5 dimensional. The dimensions should be (time, channel, height, width) or (batch, time, channel, height, width).
  • caption: Caption associated with the video for display.
  • fps: The frame rate to use when encoding raw video frames. Default value is 4. This parameter has no effect when data_or_path is a string, or bytes.
  • format: Format of video, necessary if initializing with path or io object.

Examples: Log a numpy array as a video

import numpy as np
import wandb

run = wandb.init()
# axes are (time, channel, height, width)
frames = np.random.randint(low=0, high=256, size=(10, 3, 100, 100), dtype=np.uint8)
run.log({"video": wandb.Video(frames, fps=4)})

method Video.__init__

__init__(
    data_or_path: Union[str, pathlib.Path, ForwardRef('np.ndarray'), ForwardRef('TextIO'), ForwardRef('BytesIO')],
    caption: Optional[str] = None,
    fps: Optional[int] = None,
    format: Optional[Literal['gif', 'mp4', 'webm', 'ogg']] = None
)

Initialize a W&B Video object.

Args:

  • data_or_path: Video can be initialized with a path to a file or an io object. Video can be initialized with a numpy tensor. The numpy tensor must be either 4 dimensional or 5 dimensional. The dimensions should be (number of frames, channel, height, width) or (batch, number of frames, channel, height, width). The format parameter must be specified with the format argument when initializing with a numpy array or io object.
  • caption: Caption associated with the video for display.
  • fps: The frame rate to use when encoding raw video frames. Default value is 4. This parameter has no effect when data_or_path is a string, or bytes.
  • format: Format of video, necessary if initializing with a numpy array or io object. This parameter will be used to determine the format to use when encoding the video data. Accepted values are “gif”, “mp4”, “webm”, or “ogg”. If no value is provided, the default format will be “gif”.

Examples: Log a numpy array as a video

import numpy as np
import wandb

with wandb.init() as run:
    # axes are (number of frames, channel, height, width)
    frames = np.random.randint(
        low=0, high=256, size=(10, 3, 100, 100), dtype=np.uint8
    )
    run.log({"video": wandb.Video(frames, format="mp4", fps=4)})






5.4.3 - Launch Library Reference

A collection of launch APIs for W&B.

5.4.3.1 - create_and_run_agent()

function create_and_run_agent

create_and_run_agent(
    api: wandb.apis.internal.Api,
    config: Dict[str, Any]
)  None

5.4.3.2 - launch_add()

function launch_add

launch_add(
    uri: Optional[str] = None,
    job: Optional[str] = None,
    config: Optional[Dict[str, Any]] = None,
    template_variables: Optional[Dict[str, Union[float, int, str]]] = None,
    project: Optional[str] = None,
    entity: Optional[str] = None,
    queue_name: Optional[str] = None,
    resource: Optional[str] = None,
    entry_point: Optional[List[str]] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    docker_image: Optional[str] = None,
    project_queue: Optional[str] = None,
    resource_args: Optional[Dict[str, Any]] = None,
    run_id: Optional[str] = None,
    build: Optional[bool] = False,
    repository: Optional[str] = None,
    sweep_id: Optional[str] = None,
    author: Optional[str] = None,
    priority: Optional[int] = None
)  public.QueuedRun

Enqueue a W&B launch experiment. With either a source uri, job or docker_image.

Arguments:

  • uri: URI of experiment to run. A wandb run uri or a Git repository URI.
  • job: string reference to a wandb.Job eg: wandb/test/my-job:latest
  • config: A dictionary containing the configuration for the run. May also contain resource specific arguments under the key “resource_args”
  • template_variables: A dictionary containing values of template variables for a run queue. Expected format: {“VAR_NAME”: VAR_VALUE}
  • project: Target project to send launched run to
  • entity: Target entity to send launched run to
  • queue_name: the name of the queue to enqueue the run to
  • priority: the priority level of the job, where 1 is the highest priority
  • resource: Execution backend for the run: W&B provides built-in support for “local-container” backend
  • entry_point: Entry point to run within the project. Defaults to using the entry point used in the original run for wandb URIs, or main.py for git repository URIs.
  • name: Name run under which to launch the run.
  • version: For Git-based projects, either a commit hash or a branch name.
  • docker_image: The name of the docker image to use for the run.
  • resource_args: Resource related arguments for launching runs onto a remote backend. Will be stored on the constructed launch config under resource_args.
  • run_id: optional string indicating the id of the launched run
  • build: optional flag defaulting to false, requires queue to be set if build, an image is created, creates a job artifact, pushes a reference to that job artifact to queue
  • repository: optional string to control the name of the remote repository, used when pushing images to a registry
  • project_queue: optional string to control the name of the project for the queue. Primarily used for back compatibility with project scoped queues

Example:

from wandb.sdk.launch import launch_add

project_uri = "https://github.com/wandb/examples"
params = {"alpha": 0.5, "l1_ratio": 0.01}
# Run W&B project and create a reproducible docker environment
# on a local host
api = wandb.apis.internal.Api()
launch_add(uri=project_uri, parameters=params)

Returns: an instance of wandb.api.public.QueuedRun which gives information about the queued run, or if wait_until_started or wait_until_finished are called, gives access to the underlying Run information.

Raises: wandb.exceptions.LaunchError if unsuccessful

5.4.3.3 - launch()

function launch

launch(
    api: wandb.apis.internal.Api,
    job: Optional[str] = None,
    entry_point: Optional[List[str]] = None,
    version: Optional[str] = None,
    name: Optional[str] = None,
    resource: Optional[str] = None,
    resource_args: Optional[Dict[str, Any]] = None,
    project: Optional[str] = None,
    entity: Optional[str] = None,
    docker_image: Optional[str] = None,
    config: Optional[Dict[str, Any]] = None,
    synchronous: Optional[bool] = True,
    run_id: Optional[str] = None,
    repository: Optional[str] = None
)  AbstractRun

Launch a W&B launch experiment.

Arguments:

  • job: string reference to a wandb.Job eg: wandb/test/my-job:latest
  • api: An instance of a wandb Api from wandb.apis.internal.
  • entry_point: Entry point to run within the project. Defaults to using the entry point used in the original run for wandb URIs, or main.py for git repository URIs.
  • version: For Git-based projects, either a commit hash or a branch name.
  • name: Name run under which to launch the run.
  • resource: Execution backend for the run.
  • resource_args: Resource related arguments for launching runs onto a remote backend. Will be stored on the constructed launch config under resource_args.
  • project: Target project to send launched run to
  • entity: Target entity to send launched run to
  • config: A dictionary containing the configuration for the run. May also contain resource specific arguments under the key “resource_args”.
  • synchronous: Whether to block while waiting for a run to complete. Defaults to True. Note that if synchronous is False and backend is “local-container”, this method will return, but the current process will block when exiting until the local run completes. If the current process is interrupted, any asynchronous runs launched via this method will be terminated. If synchronous is True and the run fails, the current process will error out as well.
  • run_id: ID for the run (To ultimately replace the :name: field)
  • repository: string name of repository path for remote registry

Example:

   from wandb.sdk.launch import launch

   job = "wandb/jobs/Hello World:latest"
   params = {"epochs": 5}
   # Run W&B project and create a reproducible docker environment
   # on a local host
   api = wandb.apis.internal.Api()
   launch(api, job, parameters=params)
Returns: an instance of wandb.launch.SubmittedRun exposing information (e.g. run ID) about the launched run.

Raises: wandb.exceptions.ExecutionError if a run launched in blocking mode is unsuccessful.

5.4.3.4 - LaunchAgent

class LaunchAgent

Launch agent class which polls the given run queues and launches runs for wandb launch.

method LaunchAgent.__init__

__init__(api: wandb.apis.internal.Api, config: Dict[str, Any])

Initialize a launch agent.

Arguments:

  • api: Api object to use for making requests to the backend.
  • config: Config dictionary for the agent.

property LaunchAgent.num_running_jobs

Return the number of jobs not including schedulers.


property LaunchAgent.num_running_schedulers

Return just the number of schedulers.


property LaunchAgent.thread_ids

Returns a list of running thread ids for the agent.


method LaunchAgent.check_sweep_state

check_sweep_state(
    launch_spec: Dict[str, Any],
    api: wandb.apis.internal.Api
)  None

Check the state of a sweep before launching a run for the sweep.


method LaunchAgent.fail_run_queue_item

fail_run_queue_item(
    run_queue_item_id: str,
    message: str,
    phase: str,
    files: Optional[List[str]] = None
)  None

method LaunchAgent.finish_thread_id

finish_thread_id(
    thread_id: int,
    exception: Optional[Exception, wandb.sdk.launch.errors.LaunchDockerError] = None
)  None

Removes the job from our list for now.


method LaunchAgent.get_job_and_queue

get_job_and_queue()  Optional[wandb.sdk.launch.agent.agent.JobSpecAndQueue]

classmethod LaunchAgent.initialized

initialized()  bool

Return whether the agent is initialized.


method LaunchAgent.loop

loop()  None

Loop infinitely to poll for jobs and run them.

Raises:

  • KeyboardInterrupt: if the agent is requested to stop.

classmethod LaunchAgent.name

name()  str

Return the name of the agent.


method LaunchAgent.pop_from_queue

pop_from_queue(queue: str)  Any

Pops an item off the run queue to run as a job.

Arguments:

  • queue: Queue to pop from.

Returns: Item popped off the queue.

Raises:

  • Exception: if there is an error popping from the queue.

method LaunchAgent.print_status

print_status()  None

Prints the current status of the agent.


method LaunchAgent.run_job

run_job(
    job: Dict[str, Any],
    queue: str,
    file_saver: wandb.sdk.launch.agent.run_queue_item_file_saver.RunQueueItemFileSaver
)  None

Set up project and run the job.

Arguments:

  • job: Job to run.

method LaunchAgent.task_run_job

task_run_job(
    launch_spec: Dict[str, Any],
    job: Dict[str, Any],
    default_config: Dict[str, Any],
    api: wandb.apis.internal.Api,
    job_tracker: wandb.sdk.launch.agent.job_status_tracker.JobAndRunStatusTracker
)  None

method LaunchAgent.update_status

update_status(status: str) → None

Update the status of the agent.

Arguments:

  • status: Status to update the agent to.

5.4.3.5 - load_wandb_config()

function load_wandb_config

load_wandb_config() → Config

Load wandb config from WANDB_CONFIG environment variable(s).

The WANDB_CONFIG environment variable is a JSON string that can contain multiple config keys. The WANDB_CONFIG_[0-9]+ environment variables are used for environments that limit the length of a single environment variable. In that case, the contents of WANDB_CONFIG are sharded into multiple environment variables numbered from 0.

Returns: A dictionary of wandb config values.
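The sharding behavior described above can be sketched in plain Python; load_sharded_wandb_config is an illustrative helper that mirrors the documented reassembly, not the wandb implementation:

```python
import json
import os
import re

def load_sharded_wandb_config(environ=os.environ):
    """Load a config from WANDB_CONFIG, or reassemble it from the
    WANDB_CONFIG_0, WANDB_CONFIG_1, ... shards described above."""
    if "WANDB_CONFIG" in environ:
        return json.loads(environ["WANDB_CONFIG"])
    # Collect shards in numeric order and concatenate before parsing.
    shards = sorted(
        (int(m.group(1)), v)
        for k, v in environ.items()
        if (m := re.fullmatch(r"WANDB_CONFIG_(\d+)", k))
    )
    if not shards:
        return {}
    return json.loads("".join(v for _, v in shards))

# A JSON string split across two shards mid-value:
env = {"WANDB_CONFIG_0": '{"lr": 0.0', "WANDB_CONFIG_1": '1, "epochs": 3}'}
config = load_sharded_wandb_config(env)
```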

5.4.3.6 - manage_config_file()

function manage_config_file

manage_config_file(
    path: str,
    include: Optional[List[str]] = None,
    exclude: Optional[List[str]] = None,
    schema: Optional[Any] = None
)

Declare an overridable configuration file for a launch job.

If a new job version is created from the active run, the configuration file will be added to the job’s inputs. If the job is launched and overrides have been provided for the configuration file, this function will detect the overrides from the environment and update the configuration file on disk. Note that these overrides will only be applied in ephemeral containers. include and exclude are lists of dot-separated paths within the config. The paths are used to filter subtrees of the configuration file out of the job’s inputs.

For example, given the following configuration file:

```yaml
model:
  name: resnet
  layers: 18
training:
  epochs: 10
  batch_size: 32
```

Passing include=['model'] will only include the model subtree in the job’s inputs. Passing exclude=['model.layers'] will exclude the layers key from the model subtree. Note that exclude takes precedence over include.

. is used as a separator for nested keys. If a key contains a ., it should be escaped with a backslash, e.g. include=[r'model\.layers']. Note the use of r to denote a raw string when using escape characters.

Args:

  • path (str): The path to the configuration file. This path must be relative and must not contain backwards traversal, i.e. `..`.
  • include (List[str]): A list of keys to include in the configuration file.
  • exclude (List[str]): A list of keys to exclude from the configuration file.
  • schema (dict | Pydantic model): A JSON Schema or Pydantic model describing which attributes will be editable from the Launch drawer. Accepts either an instance of a Pydantic BaseModel class or the BaseModel class itself.

Raises:

  • LaunchError: If the path is not valid, or if there is no active run.
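The include/exclude filtering rules above can be illustrated on a plain nested dict; filter_config is a hypothetical helper that follows the documented semantics, not wandb code:

```python
def filter_config(config, include=None, exclude=None, prefix=""):
    """Filter a nested dict by dot-separated paths: include keeps
    matching subtrees, exclude drops them, and exclude takes
    precedence over include. Illustrative helper only."""
    out = {}
    for key, value in config.items():
        path = f"{prefix}{key}"
        if exclude and path in exclude:
            continue  # exclude wins over include
        if isinstance(value, dict):
            child = filter_config(value, include, exclude, prefix=f"{path}.")
            if child:
                out[key] = child
        elif include is None or any(
            path == p or path.startswith(p + ".") or p.startswith(path + ".")
            for p in include
        ):
            out[key] = value
    return out

cfg = {"model": {"name": "resnet", "layers": 18},
       "training": {"epochs": 10, "batch_size": 32}}
only_model = filter_config(cfg, include=["model"])
no_layers = filter_config(cfg, exclude=["model.layers"])
```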

5.4.3.7 - manage_wandb_config()

function manage_wandb_config

manage_wandb_config(
    include: Optional[List[str]] = None,
    exclude: Optional[List[str]] = None,
    schema: Optional[Any] = None
)

Declare wandb.config as an overridable configuration for a launch job.

If a new job version is created from the active run, the run config (wandb.config) will become an overridable input of the job. If the job is launched and overrides have been provided for the run config, the overrides will be applied to the run config when wandb.init is called. include and exclude are lists of dot-separated paths within the config. The paths are used to filter subtrees of the run config out of the job’s inputs.

For example, given the following run config contents:

```yaml
model:
  name: resnet
  layers: 18
training:
  epochs: 10
  batch_size: 32
```

Passing include=['model'] will only include the model subtree in the job’s inputs. Passing exclude=['model.layers'] will exclude the layers key from the model subtree. Note that exclude takes precedence over include. . is used as a separator for nested keys. If a key contains a ., it should be escaped with a backslash, e.g. include=[r'model\.layers']. Note the use of r to denote a raw string when using escape characters.

Args:

  • include (List[str]): A list of subtrees to include in the configuration.
  • exclude (List[str]): A list of subtrees to exclude from the configuration.
  • schema (dict | Pydantic model): A JSON Schema or Pydantic model describing which attributes will be editable from the Launch drawer. Accepts either an instance of a Pydantic BaseModel class or the BaseModel class itself.

Raises:

  • LaunchError: If there is no active run.
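The dot-separator and backslash-escaping rules above can be sketched as follows; split_config_path is an illustrative helper, not part of the wandb SDK:

```python
import re

def split_config_path(path):
    """Split a dot-separated config path into keys, honoring
    backslash-escaped dots. Illustrative helper only."""
    # Split on dots not preceded by a backslash, then unescape the rest.
    parts = re.split(r"(?<!\\)\.", path)
    return [p.replace("\\.", ".") for p in parts]

plain = split_config_path("model.layers")     # nested key
escaped = split_config_path(r"model\.layers") # one key containing a dot
```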

6 - Query Expression Language

Use query expressions to select and aggregate data across runs and projects. Learn more about query panels.
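Ops chain left to right: a hypothetical expression such as the following selects a summary metric across runs and averages it (the exact panel syntax may differ):

```
runs.summary["accuracy"].avg
```

Here runs produces a list, summary["accuracy"] maps over it, and avg reduces the resulting numbers (the numbers-avg op documented below).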

Data Types

6.1 - artifact

Chainable Ops

artifact-link

Returns the url for an artifact

Argument
artifact An artifact

Return Value

The url for an artifact

artifact-name

Returns the name of the artifact

Argument
artifact An artifact

Return Value

The name of the artifact

artifact-versions

Returns the versions of the artifact

Argument
artifact An artifact

Return Value

The versions of the artifact

List Ops

artifact-link

Returns the url for an artifact

Argument
artifact An artifact

Return Value

The url for an artifact

artifact-name

Returns the name of the artifact

Argument
artifact An artifact

Return Value

The name of the artifact

artifact-versions

Returns the versions of the artifact

Argument
artifact An artifact

Return Value

The versions of the artifact

6.2 - artifactType

Chainable Ops

artifactType-artifactVersions

Returns the artifactVersions of all artifacts of the artifactType

Argument
artifactType An artifactType

Return Value

The artifactVersions of all artifacts of the artifactType

artifactType-artifacts

Returns the artifacts of the artifactType

Argument
artifactType An artifactType

Return Value

The artifacts of the artifactType

artifactType-name

Returns the name of the artifactType

Argument
artifactType An artifactType

Return Value

The name of the artifactType

List Ops

artifactType-artifactVersions

Returns the artifactVersions of all artifacts of the artifactType

Argument
artifactType An artifactType

Return Value

The artifactVersions of all artifacts of the artifactType

artifactType-artifacts

Returns the artifacts of the artifactType

Argument
artifactType An artifactType

Return Value

The artifacts of the artifactType

artifactType-name

Returns the name of the artifactType

Argument
artifactType An artifactType

Return Value

The name of the artifactType

6.3 - artifactVersion

Chainable Ops

artifactVersion-aliases

Returns the aliases for an artifactVersion

Argument
artifactVersion An artifactVersion

Return Value

The aliases for an artifactVersion

artifactVersion-createdAt

Returns the datetime at which the artifactVersion was created

Argument
artifactVersion An artifactVersion

Return Value

The datetime at which the artifactVersion was created

artifactVersion-file

Returns the file of the artifactVersion for the given path

Argument
artifactVersion An artifactVersion
path The path of the file

Return Value

The file of the artifactVersion for the given path

artifactVersion-files

Returns the list of files of the artifactVersion

Argument
artifactVersion An artifactVersion

Return Value

The list of files of the artifactVersion

artifactVersion-link

Returns the url for an artifactVersion

Argument
artifactVersion An artifactVersion

Return Value

The url for an artifactVersion

artifactVersion-metadata

Returns the artifactVersion metadata dictionary

Argument
artifactVersion An artifactVersion

Return Value

The artifactVersion metadata dictionary

artifactVersion-name

Returns the name of the artifactVersion

Argument
artifactVersion An artifactVersion

Return Value

The name of the artifactVersion

artifactVersion-size

Returns the size of the artifactVersion

Argument
artifactVersion An artifactVersion

Return Value

The size of the artifactVersion

artifactVersion-usedBy

Returns the runs that use the artifactVersion

Argument
artifactVersion An artifactVersion

Return Value

The runs that use the artifactVersion

artifactVersion-versionId

Returns the versionId of the artifactVersion

Argument
artifactVersion An artifactVersion

Return Value

The versionId of the artifactVersion

List Ops

artifactVersion-aliases

Returns the aliases for an artifactVersion

Argument
artifactVersion An artifactVersion

Return Value

The aliases for an artifactVersion

artifactVersion-createdAt

Returns the datetime at which the artifactVersion was created

Argument
artifactVersion An artifactVersion

Return Value

The datetime at which the artifactVersion was created

artifactVersion-file

Returns the file of the artifactVersion for the given path

Argument
artifactVersion An artifactVersion
path The path of the file

Return Value

The file of the artifactVersion for the given path

artifactVersion-files

Returns the list of files of the artifactVersion

Argument
artifactVersion An artifactVersion

Return Value

The list of files of the artifactVersion

artifactVersion-link

Returns the url for an artifactVersion

Argument
artifactVersion An artifactVersion

Return Value

The url for an artifactVersion

artifactVersion-metadata

Returns the artifactVersion metadata dictionary

Argument
artifactVersion An artifactVersion

Return Value

The artifactVersion metadata dictionary

artifactVersion-name

Returns the name of the artifactVersion

Argument
artifactVersion An artifactVersion

Return Value

The name of the artifactVersion

artifactVersion-size

Returns the size of the artifactVersion

Argument
artifactVersion An artifactVersion

Return Value

The size of the artifactVersion

artifactVersion-usedBy

Returns the runs that use the artifactVersion

Argument
artifactVersion An artifactVersion

Return Value

The runs that use the artifactVersion

artifactVersion-versionId

Returns the versionId of the artifactVersion

Argument
artifactVersion An artifactVersion

Return Value

The versionId of the artifactVersion

6.4 - audio-file

Chainable Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

List Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

6.5 - bokeh-file

Chainable Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

List Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

6.6 - boolean

Chainable Ops

and

Returns the logical and of the two values

Argument
lhs First binary value
rhs Second binary value

Return Value

The logical and of the two values

or

Returns the logical or of the two values

Argument
lhs First binary value
rhs Second binary value

Return Value

The logical or of the two values

boolean-not

Returns the logical inverse of the value

Argument
bool The boolean value

Return Value

The logical inverse of the value

List Ops

and

Returns the logical and of the two values

Argument
lhs First binary value
rhs Second binary value

Return Value

The logical and of the two values

or

Returns the logical or of the two values

Argument
lhs First binary value
rhs Second binary value

Return Value

The logical or of the two values

boolean-not

Returns the logical inverse of the value

Argument
bool The boolean value

Return Value

The logical inverse of the value

6.7 - entity

Chainable Ops

entity-link

Returns the link of the entity

Argument
entity An entity

Return Value

The link of the entity

entity-name

Returns the name of the entity

Argument
entity An entity

Return Value

The name of the entity

List Ops

entity-link

Returns the link of the entity

Argument
entity An entity

Return Value

The link of the entity

entity-name

Returns the name of the entity

Argument
entity An entity

Return Value

The name of the entity

6.8 - file

Chainable Ops

file-contents

Returns the contents of the file

Argument
file A file

Return Value

The contents of the file

file-digest

Returns the digest of the file

Argument
file A file

Return Value

The digest of the file

file-size

Returns the size of the file

Argument
file A file

Return Value

The size of the file

file-table

Returns the contents of the file as a table

Argument
file A file

Return Value

The contents of the file as a table

List Ops

file-contents

Returns the contents of the file

Argument
file A file

Return Value

The contents of the file

file-digest

Returns the digest of the file

Argument
file A file

Return Value

The digest of the file

file-size

Returns the size of the file

Argument
file A file

Return Value

The size of the file

file-table

Returns the contents of the file as a table

Argument
file A file

Return Value

The contents of the file as a table

6.9 - float

Chainable Ops

number-notEqual

Determines inequality of two values.

Argument
lhs The first value to compare.
rhs The second value to compare.

Return Value

Whether the two values are not equal.

number-modulo

Divide a number by another and return remainder

Argument
lhs number to divide
rhs number to divide by

Return Value

Modulo of two numbers

number-mult

Multiply two numbers

Argument
lhs First number
rhs Second number

Return Value

Product of two numbers

number-powBinary

Raise a number to an exponent

Argument
lhs Base number
rhs Exponent number

Return Value

The base number raised to the nth power

number-add

Add two numbers

Argument
lhs First number
rhs Second number

Return Value

Sum of two numbers

number-sub

Subtract a number from another

Argument
lhs number to subtract from
rhs number to subtract

Return Value

Difference of two numbers

number-div

Divide a number by another

Argument
lhs number to divide
rhs number to divide by

Return Value

Quotient of two numbers

number-less

Check if a number is less than another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is less than the second

number-lessEqual

Check if a number is less than or equal to another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is less than or equal to the second

number-equal

Determines equality of two values.

Argument
lhs The first value to compare.
rhs The second value to compare.

Return Value

Whether the two values are equal.

number-greater

Check if a number is greater than another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is greater than the second

number-greaterEqual

Check if a number is greater than or equal to another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is greater than or equal to the second

number-negate

Negate a number

Argument
val Number to negate

Return Value

A number

number-toString

Convert a number to a string

Argument
in Number to convert

Return Value

String representation of the number

number-toTimestamp

Converts a number to a timestamp. Values less than 31536000000 will be converted to seconds, values less than 31536000000000 will be converted to milliseconds, values less than 31536000000000000 will be converted to microseconds, and values less than 31536000000000000000 will be converted to nanoseconds.

Argument
val Number to convert to a timestamp

Return Value

Timestamp
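The documented thresholds amount to guessing the unit from the value's magnitude (31536000000 is roughly 1000 years in seconds). A sketch, assuming the result is normalized to milliseconds — the op's exact output unit is not stated here:

```python
def to_timestamp_ms(val):
    """Normalize a numeric value to milliseconds using the documented
    thresholds: values below 31536000000 are treated as seconds, below
    31536000000000 as milliseconds, below 31536000000000000 as
    microseconds, otherwise nanoseconds. Illustrative sketch only."""
    if val < 31_536_000_000:          # seconds
        return val * 1000
    if val < 31_536_000_000_000:      # already milliseconds
        return val
    if val < 31_536_000_000_000_000:  # microseconds
        return val // 1000
    return val // 1_000_000           # nanoseconds

seconds = to_timestamp_ms(1_700_000_000)                # seconds-scale input
nanos = to_timestamp_ms(1_700_000_000_000_000_000)      # nanoseconds-scale input
```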

number-abs

Calculates the absolute value of a number

Argument
n A number

Return Value

The absolute value of the number

List Ops

number-notEqual

Determines inequality of two values.

Argument
lhs The first value to compare.
rhs The second value to compare.

Return Value

Whether the two values are not equal.

number-modulo

Divide a number by another and return remainder

Argument
lhs number to divide
rhs number to divide by

Return Value

Modulo of two numbers

number-mult

Multiply two numbers

Argument
lhs First number
rhs Second number

Return Value

Product of two numbers

number-powBinary

Raise a number to an exponent

Argument
lhs Base number
rhs Exponent number

Return Value

The base number raised to the nth power

number-add

Add two numbers

Argument
lhs First number
rhs Second number

Return Value

Sum of two numbers

number-sub

Subtract a number from another

Argument
lhs number to subtract from
rhs number to subtract

Return Value

Difference of two numbers

number-div

Divide a number by another

Argument
lhs number to divide
rhs number to divide by

Return Value

Quotient of two numbers

number-less

Check if a number is less than another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is less than the second

number-lessEqual

Check if a number is less than or equal to another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is less than or equal to the second

number-equal

Determines equality of two values.

Argument
lhs The first value to compare.
rhs The second value to compare.

Return Value

Whether the two values are equal.

number-greater

Check if a number is greater than another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is greater than the second

number-greaterEqual

Check if a number is greater than or equal to another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is greater than or equal to the second

number-negate

Negate a number

Argument
val Number to negate

Return Value

A number

numbers-argmax

Finds the index of maximum number

Argument
numbers list of numbers to find the index of maximum number

Return Value

Index of maximum number

numbers-argmin

Finds the index of minimum number

Argument
numbers list of numbers to find the index of minimum number

Return Value

Index of minimum number

numbers-avg

Average of numbers

Argument
numbers list of numbers to average

Return Value

Average of numbers

numbers-max

Maximum number

Argument
numbers list of numbers to find the maximum number

Return Value

Maximum number

numbers-min

Minimum number

Argument
numbers list of numbers to find the minimum number

Return Value

Minimum number

numbers-stddev

Standard deviation of numbers

Argument
numbers list of numbers to calculate the standard deviation

Return Value

Standard deviation of numbers

numbers-sum

Sum of numbers

Argument
numbers list of numbers to sum

Return Value

Sum of numbers
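The list aggregation ops above correspond to standard reductions; a plain Python sketch follows (whether numbers-stddev uses the population or sample variant is not stated here, so pstdev is an assumption):

```python
import statistics

numbers = [3.0, 1.0, 4.0, 1.5]

# Plain-Python counterparts of the list aggregation ops:
argmax = max(range(len(numbers)), key=numbers.__getitem__)  # numbers-argmax
argmin = min(range(len(numbers)), key=numbers.__getitem__)  # numbers-argmin
avg = sum(numbers) / len(numbers)                           # numbers-avg
maximum = max(numbers)                                      # numbers-max
minimum = min(numbers)                                      # numbers-min
stddev = statistics.pstdev(numbers)                         # numbers-stddev (population variant assumed)
total = sum(numbers)                                        # numbers-sum
```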

number-toString

Convert a number to a string

Argument
in Number to convert

Return Value

String representation of the number

number-toTimestamp

Converts a number to a timestamp. Values less than 31536000000 will be converted to seconds, values less than 31536000000000 will be converted to milliseconds, values less than 31536000000000000 will be converted to microseconds, and values less than 31536000000000000000 will be converted to nanoseconds.

Argument
val Number to convert to a timestamp

Return Value

Timestamp

number-abs

Calculates the absolute value of a number

Argument
n A number

Return Value

The absolute value of the number

6.10 - html-file

Chainable Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

List Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

6.11 - image-file

Chainable Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

List Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

6.12 - int

Chainable Ops

number-notEqual

Determines inequality of two values.

Argument
lhs The first value to compare.
rhs The second value to compare.

Return Value

Whether the two values are not equal.

number-modulo

Divide a number by another and return remainder

Argument
lhs number to divide
rhs number to divide by

Return Value

Modulo of two numbers

number-mult

Multiply two numbers

Argument
lhs First number
rhs Second number

Return Value

Product of two numbers

number-powBinary

Raise a number to an exponent

Argument
lhs Base number
rhs Exponent number

Return Value

The base number raised to the nth power

number-add

Add two numbers

Argument
lhs First number
rhs Second number

Return Value

Sum of two numbers

number-sub

Subtract a number from another

Argument
lhs number to subtract from
rhs number to subtract

Return Value

Difference of two numbers

number-div

Divide a number by another

Argument
lhs number to divide
rhs number to divide by

Return Value

Quotient of two numbers

number-less

Check if a number is less than another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is less than the second

number-lessEqual

Check if a number is less than or equal to another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is less than or equal to the second

number-equal

Determines equality of two values.

Argument
lhs The first value to compare.
rhs The second value to compare.

Return Value

Whether the two values are equal.

number-greater

Check if a number is greater than another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is greater than the second

number-greaterEqual

Check if a number is greater than or equal to another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is greater than or equal to the second

number-negate

Negate a number

Argument
val Number to negate

Return Value

A number

number-toString

Convert a number to a string

Argument
in Number to convert

Return Value

String representation of the number

number-toTimestamp

Converts a number to a timestamp. Values less than 31536000000 will be converted to seconds, values less than 31536000000000 will be converted to milliseconds, values less than 31536000000000000 will be converted to microseconds, and values less than 31536000000000000000 will be converted to nanoseconds.

Argument
val Number to convert to a timestamp

Return Value

Timestamp

number-abs

Calculates the absolute value of a number

Argument
n A number

Return Value

The absolute value of the number

List Ops

number-notEqual

Determines inequality of two values.

Argument
lhs The first value to compare.
rhs The second value to compare.

Return Value

Whether the two values are not equal.

number-modulo

Divide a number by another and return remainder

Argument
lhs number to divide
rhs number to divide by

Return Value

Modulo of two numbers

number-mult

Multiply two numbers

Argument
lhs First number
rhs Second number

Return Value

Product of two numbers

number-powBinary

Raise a number to an exponent

Argument
lhs Base number
rhs Exponent number

Return Value

The base number raised to the nth power

number-add

Add two numbers

Argument
lhs First number
rhs Second number

Return Value

Sum of two numbers

number-sub

Subtract a number from another

Argument
lhs number to subtract from
rhs number to subtract

Return Value

Difference of two numbers

number-div

Divide a number by another

Argument
lhs number to divide
rhs number to divide by

Return Value

Quotient of two numbers

number-less

Check if a number is less than another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is less than the second

number-lessEqual

Check if a number is less than or equal to another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is less than or equal to the second

number-equal

Determines equality of two values.

Argument
lhs The first value to compare.
rhs The second value to compare.

Return Value

Whether the two values are equal.

number-greater

Check if a number is greater than another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is greater than the second

number-greaterEqual

Check if a number is greater than or equal to another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is greater than or equal to the second

number-negate

Negate a number

Argument
val Number to negate

Return Value

A number

numbers-argmax

Finds the index of maximum number

Argument
numbers list of numbers to find the index of maximum number

Return Value

Index of maximum number

numbers-argmin

Finds the index of minimum number

Argument
numbers list of numbers to find the index of minimum number

Return Value

Index of minimum number

numbers-avg

Average of numbers

Argument
numbers list of numbers to average

Return Value

Average of numbers

numbers-max

Maximum number

Argument
numbers list of numbers to find the maximum number

Return Value

Maximum number

numbers-min

Minimum number

Argument
numbers list of numbers to find the minimum number

Return Value

Minimum number

numbers-stddev

Standard deviation of numbers

Argument
numbers list of numbers to calculate the standard deviation

Return Value

Standard deviation of numbers

numbers-sum

Sum of numbers

Argument
numbers list of numbers to sum

Return Value

Sum of numbers

number-toString

Convert a number to a string

Argument
in Number to convert

Return Value

String representation of the number

number-toTimestamp

Converts a number to a timestamp. Values less than 31536000000 will be converted to seconds, values less than 31536000000000 will be converted to milliseconds, values less than 31536000000000000 will be converted to microseconds, and values less than 31536000000000000000 will be converted to nanoseconds.

Argument
val Number to convert to a timestamp

Return Value

Timestamp

number-abs

Calculates the absolute value of a number

Argument
n A number

Return Value

The absolute value of the number

6.13 - joined-table

Chainable Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

joinedtable-file

Returns the file of a joined-table

Argument
joinedTable The joined-table

Return Value

The file of a joined-table

joinedtable-rows

Returns the rows of a joined-table

Argument
joinedTable The joined-table
leftOuter Whether to include rows from the left table that do not have a matching row in the right table
rightOuter Whether to include rows from the right table that do not have a matching row in the left table

Return Value

The rows of the joined-table
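The leftOuter/rightOuter semantics can be illustrated with a plain Python join over row dictionaries; join_rows is a hypothetical helper mirroring the documented behavior, not the actual implementation:

```python
def join_rows(left, right, key, left_outer=False, right_outer=False):
    """Join two row lists on `key`. Unmatched left rows are kept when
    left_outer is set; unmatched right rows when right_outer is set."""
    right_by_key = {}
    for row in right:
        right_by_key.setdefault(row[key], []).append(row)
    left_keys = {row[key] for row in left}
    joined = []
    for row in left:
        if row[key] in right_by_key:
            for other in right_by_key[row[key]]:
                joined.append({**row, **other})  # matched pair, merged
        elif left_outer:
            joined.append(dict(row))             # unmatched left row
    if right_outer:
        for row in right:
            if row[key] not in left_keys:
                joined.append(dict(row))         # unmatched right row
    return joined

left = [{"id": 1, "a": "x"}, {"id": 2, "a": "y"}]
right = [{"id": 1, "b": "p"}, {"id": 3, "b": "q"}]
inner = join_rows(left, right, "id")
full = join_rows(left, right, "id", left_outer=True, right_outer=True)
```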

List Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

6.14 - molecule-file

Chainable Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

List Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

6.15 - number

Chainable Ops

number-notEqual

Determines inequality of two values.

Argument
lhs The first value to compare.
rhs The second value to compare.

Return Value

Whether the two values are not equal.

number-modulo

Divide a number by another and return remainder

Argument
lhs number to divide
rhs number to divide by

Return Value

Modulo of two numbers

number-mult

Multiply two numbers

Argument
lhs First number
rhs Second number

Return Value

Product of two numbers

number-powBinary

Raise a number to an exponent

Argument
lhs Base number
rhs Exponent number

Return Value

The base number raised to the nth power

number-add

Add two numbers

Argument
lhs First number
rhs Second number

Return Value

Sum of two numbers

number-sub

Subtract a number from another

Argument
lhs number to subtract from
rhs number to subtract

Return Value

Difference of two numbers

number-div

Divide a number by another

Argument
lhs number to divide
rhs number to divide by

Return Value

Quotient of two numbers

number-less

Check if a number is less than another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is less than the second

number-lessEqual

Check if a number is less than or equal to another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is less than or equal to the second

number-equal

Determines equality of two values.

Argument
lhs The first value to compare.
rhs The second value to compare.

Return Value

Whether the two values are equal.

number-greater

Check if a number is greater than another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is greater than the second

number-greaterEqual

Check if a number is greater than or equal to another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is greater than or equal to the second

number-negate

Negate a number

Argument
val Number to negate

Return Value

A number

number-toString

Convert a number to a string

Argument
in Number to convert

Return Value

String representation of the number

number-toTimestamp

Converts a number to a timestamp. Values less than 31536000000 will be converted to seconds, values less than 31536000000000 will be converted to milliseconds, values less than 31536000000000000 will be converted to microseconds, and values less than 31536000000000000000 will be converted to nanoseconds.

Argument
val Number to convert to a timestamp

Return Value

Timestamp

number-abs

Calculates the absolute value of a number

Argument
n A number

Return Value

The absolute value of the number

List Ops

number-notEqual

Determines inequality of two values.

Argument
lhs The first value to compare.
rhs The second value to compare.

Return Value

Whether the two values are not equal.

number-modulo

Divide a number by another and return remainder

Argument
lhs number to divide
rhs number to divide by

Return Value

Modulo of two numbers

number-mult

Multiply two numbers

Argument
lhs First number
rhs Second number

Return Value

Product of two numbers

number-powBinary

Raise a number to an exponent

Argument
lhs Base number
rhs Exponent number

Return Value

The base number raised to the nth power

number-add

Add two numbers

Argument
lhs First number
rhs Second number

Return Value

Sum of two numbers

number-sub

Subtract a number from another

Argument
lhs number to subtract from
rhs number to subtract

Return Value

Difference of two numbers

number-div

Divide a number by another

Argument
lhs number to divide
rhs number to divide by

Return Value

Quotient of two numbers

number-less

Check if a number is less than another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is less than the second

number-lessEqual

Check if a number is less than or equal to another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is less than or equal to the second

number-equal

Determines equality of two values.

Argument
lhs The first value to compare.
rhs The second value to compare.

Return Value

Whether the two values are equal.

number-greater

Check if a number is greater than another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is greater than the second

number-greaterEqual

Check if a number is greater than or equal to another

Argument
lhs number to compare
rhs number to compare to

Return Value

Whether the first number is greater than or equal to the second

number-negate

Negate a number

Argument
val Number to negate

Return Value

A number

numbers-argmax

Finds the index of maximum number

Argument
numbers list of numbers to find the index of maximum number

Return Value

Index of maximum number

numbers-argmin

Finds the index of the minimum number

Argument
numbers The list of numbers to search for the minimum

Return Value

Index of the minimum number
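
The argmax and argmin ops above can be illustrated with a short Python sketch (an analogy for the documented behavior, not the ops' actual implementation; resolving ties toward the first occurrence is an assumption, since the descriptions do not specify it):

```python
def argmax(numbers):
    # Index of the maximum value; ties resolve to the first
    # occurrence here (an assumption, not documented by the op).
    return max(range(len(numbers)), key=lambda i: numbers[i])

def argmin(numbers):
    # Index of the minimum value, with the same tie-breaking assumption.
    return min(range(len(numbers)), key=lambda i: numbers[i])
```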

numbers-avg

Average of numbers

Argument
numbers The list of numbers to average

Return Value

Average of numbers

numbers-max

Maximum number

Argument
numbers The list of numbers to find the maximum of

Return Value

Maximum number

numbers-min

Minimum number

Argument
numbers The list of numbers to find the minimum of

Return Value

Minimum number

numbers-stddev

Standard deviation of numbers

Argument
numbers The list of numbers to calculate the standard deviation of

Return Value

Standard deviation of numbers
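
As a hedged sketch of the computation (Python used for illustration; whether the op uses the population or the sample formula is not specified in the description, so the population form is an assumption here):

```python
import math

def stddev(numbers):
    # Population standard deviation: sqrt of the mean squared
    # deviation from the mean. The sample (n - 1) variant would
    # divide by len(numbers) - 1 instead.
    mean = sum(numbers) / len(numbers)
    return math.sqrt(sum((x - mean) ** 2 for x in numbers) / len(numbers))
```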

numbers-sum

Sum of numbers

Argument
numbers The list of numbers to sum

Return Value

Sum of numbers

number-toString

Convert a number to a string

Argument
in Number to convert

Return Value

String representation of the number

number-toTimestamp

Converts a number to a timestamp. Values less than 31536000000 are interpreted as seconds, values less than 31536000000000 as milliseconds, values less than 31536000000000000 as microseconds, and values less than 31536000000000000000 as nanoseconds.

Argument
val Number to convert to a timestamp

Return Value

Timestamp
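
The unit-selection rules above amount to magnitude thresholds, which can be sketched in Python (illustrative only; the `to_timestamp` helper name is hypothetical and the op itself runs in the query language):

```python
def to_timestamp(val):
    # Thresholds taken from the op description; each is roughly
    # 1000 years expressed in the corresponding unit.
    if val < 31_536_000_000:
        return ("seconds", float(val))
    elif val < 31_536_000_000_000:
        return ("milliseconds", val / 1e3)
    elif val < 31_536_000_000_000_000:
        return ("microseconds", val / 1e6)
    elif val < 31_536_000_000_000_000_000:
        return ("nanoseconds", val / 1e9)
    # Behavior above the largest threshold is not documented.
    raise ValueError("value above documented range")
```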

number-abs

Calculates the absolute value of a number

Argument
n A number

Return Value

The absolute value of the number

6.16 - object3D-file

Chainable Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

List Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

6.17 - partitioned-table

Chainable Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

partitionedtable-file

Returns the file of a partitioned-table

Argument
partitionedTable The partitioned-table

Return Value

file of the partitioned-table

partitionedtable-rows

Returns the rows of a partitioned-table

Argument
partitionedTable The partitioned-table to get rows from

Return Value

Rows of the partitioned-table

List Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

6.18 - project

Chainable Ops

project-artifact

Returns the artifact for a given name within a project

Argument
project A project
artifactName The name of the artifact

Return Value

The artifact for a given name within a project

project-artifactType

Returns the [artifactType](artifact-type.md) for a given name within a project

Argument
project A project
artifactType The name of the [artifactType](artifact-type.md)

Return Value

The [artifactType](artifact-type.md) for a given name within a project

project-artifactTypes

Returns the [artifactTypes](artifact-type.md) for a project

Argument
project A project

Return Value

The [artifactTypes](artifact-type.md) for a project

project-artifactVersion

Returns the [artifactVersion](artifact-version.md) for a given name and version within a project

Argument
project A project
artifactName The name of the [artifactVersion](artifact-version.md)
artifactVersionAlias The version alias of the [artifactVersion](artifact-version.md)

Return Value

The [artifactVersion](artifact-version.md) for a given name and version within a project

project-createdAt

Returns the creation time of the project

Argument
project A project

Return Value

The creation time of the project

project-name

Returns the name of the project

Argument
project A project

Return Value

The name of the project

project-runs

Returns the runs from a project

Argument
project A project

Return Value

The runs from a project

List Ops

project-artifact

Returns the artifact for a given name within a project

Argument
project A project
artifactName The name of the artifact

Return Value

The artifact for a given name within a project

project-artifactType

Returns the [artifactType](artifact-type.md) for a given name within a project

Argument
project A project
artifactType The name of the [artifactType](artifact-type.md)

Return Value

The [artifactType](artifact-type.md) for a given name within a project

project-artifactTypes

Returns the [artifactTypes](artifact-type.md) for a project

Argument
project A project

Return Value

The [artifactTypes](artifact-type.md) for a project

project-artifactVersion

Returns the [artifactVersion](artifact-version.md) for a given name and version within a project

Argument
project A project
artifactName The name of the [artifactVersion](artifact-version.md)
artifactVersionAlias The version alias of the [artifactVersion](artifact-version.md)

Return Value

The [artifactVersion](artifact-version.md) for a given name and version within a project

project-createdAt

Returns the creation time of the project

Argument
project A project

Return Value

The creation time of the project

project-name

Returns the name of the project

Argument
project A project

Return Value

The name of the project

project-runs

Returns the runs from a project

Argument
project A project

Return Value

The runs from a project

6.19 - pytorch-model-file

Chainable Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

List Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

6.20 - run

Chainable Ops

run-config

Returns the config typedDict of the run

Argument
run A run

Return Value

The config typedDict of the run

run-createdAt

Returns the created at datetime of the run

Argument
run A run

Return Value

The created at datetime of the run

run-heartbeatAt

Returns the last heartbeat datetime of the run

Argument
run A run

Return Value

The last heartbeat datetime of the run

run-history

Returns the log history of the run

Argument
run A run

Return Value

The log history of the run

run-jobType

Returns the job type of the run

Argument
run A run

Return Value

The job type of the run

run-loggedArtifactVersion

Returns the artifactVersion logged by the run for a given name and alias

Argument
run A run
artifactVersionName The name:alias of the artifactVersion

Return Value

The artifactVersion logged by the run for a given name and alias

run-loggedArtifactVersions

Returns all of the artifactVersions logged by the run

Argument
run A run

Return Value

The artifactVersions logged by the run

run-name

Returns the name of the run

Argument
run A run

Return Value

The name of the run

run-runtime

Returns the runtime in seconds of the run

Argument
run A run

Return Value

The runtime in seconds of the run

run-summary

Returns the summary typedDict of the run

Argument
run A run

Return Value

The summary typedDict of the run

run-usedArtifactVersions

Returns all of the artifactVersions used by the run

Argument
run A run

Return Value

The artifactVersions used by the run

run-user

Returns the user of the run

Argument
run A run

Return Value

The user of the run

List Ops

run-config

Returns the config typedDict of the run

Argument
run A run

Return Value

The config typedDict of the run

run-createdAt

Returns the created at datetime of the run

Argument
run A run

Return Value

The created at datetime of the run

run-heartbeatAt

Returns the last heartbeat datetime of the run

Argument
run A run

Return Value

The last heartbeat datetime of the run

run-history

Returns the log history of the run

Argument
run A run

Return Value

The log history of the run

run-jobType

Returns the job type of the run

Argument
run A run

Return Value

The job type of the run

run-loggedArtifactVersion

Returns the artifactVersion logged by the run for a given name and alias

Argument
run A run
artifactVersionName The name:alias of the artifactVersion

Return Value

The artifactVersion logged by the run for a given name and alias

run-loggedArtifactVersions

Returns all of the artifactVersions logged by the run

Argument
run A run

Return Value

The artifactVersions logged by the run

run-name

Returns the name of the run

Argument
run A run

Return Value

The name of the run

run-runtime

Returns the runtime in seconds of the run

Argument
run A run

Return Value

The runtime in seconds of the run

run-summary

Returns the summary typedDict of the run

Argument
run A run

Return Value

The summary typedDict of the run

run-usedArtifactVersions

Returns all of the artifactVersions used by the run

Argument
run A run

Return Value

The artifactVersions used by the run

6.21 - string

Chainable Ops

string-notEqual

Determines inequality of two values.

Argument
lhs The first value to compare.
rhs The second value to compare.

Return Value

Whether the two values are not equal.

string-add

Concatenates two strings

Argument
lhs The first string
rhs The second string

Return Value

The concatenated string

string-equal

Determines equality of two values.

Argument
lhs The first value to compare.
rhs The second value to compare.

Return Value

Whether the two values are equal.

string-append

Appends a suffix to a string

Argument
str The string to append to
suffix The suffix to append

Return Value

The string with the suffix appended

string-contains

Checks if a string contains a substring

Argument
str The string to check
sub The substring to check for

Return Value

Whether the string contains the substring

string-endsWith

Checks if a string ends with a suffix

Argument
str The string to check
suffix The suffix to check for

Return Value

Whether the string ends with the suffix

string-findAll

Finds all occurrences of a substring in a string

Argument
str The string to find occurrences of the substring in
sub The substring to find

Return Value

The list of indices of the substring in the string
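
The behavior can be sketched in Python (the `find_all` helper name is hypothetical; non-overlapping matching is an assumption, since the op description does not say how overlapping occurrences are handled):

```python
def find_all(s, sub):
    # Collect start indices of matches, scanning left to right and
    # skipping past each match (non-overlapping; an assumption).
    if not sub:
        return []
    indices, start = [], 0
    while (i := s.find(sub, start)) != -1:
        indices.append(i)
        start = i + len(sub)
    return indices
```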

string-isAlnum

Checks if a string is alphanumeric

Argument
str The string to check

Return Value

Whether the string is alphanumeric

string-isAlpha

Checks if a string is alphabetic

Argument
str The string to check

Return Value

Whether the string is alphabetic

string-isNumeric

Checks if a string is numeric

Argument
str The string to check

Return Value

Whether the string is numeric

string-lStrip

Strip leading whitespace

Argument
str The string to strip.

Return Value

The stripped string.

string-len

Returns the length of a string

Argument
str The string to check

Return Value

The length of the string

string-lower

Converts a string to lowercase

Argument
str The string to convert to lowercase

Return Value

The lowercase string

string-partition

Partitions a string into a list of strings

Argument
str The string to partition
sep The separator to partition on

Return Value

A list of strings: the string before the separator, the separator, and the string after the separator
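
This mirrors Python's built-in `str.partition`, shown here as an analogy only, with the result as a list to match the op's documented return shape:

```python
def partition(s, sep):
    # Equivalent of Python's str.partition, as a three-element list.
    # When sep is absent, Python yields (s, "", ""); whether the op
    # behaves the same way is not specified in its description.
    before, found, after = s.partition(sep)
    return [before, found, after]
```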

string-prepend

Prepends a prefix to a string

Argument
str The string to prepend to
prefix The prefix to prepend

Return Value

The string with the prefix prepended

string-rStrip

Strip trailing whitespace

Argument
str The string to strip.

Return Value

The stripped string.

string-replace

Replaces all occurrences of a substring in a string

Argument
str The string to replace contents of
sub The substring to replace
newSub The substring to replace the old substring with

Return Value

The string with the replacements

string-slice

Slices a string into a substring based on beginning and end indices

Argument
str The string to slice
begin The beginning index of the substring
end The ending index of the substring

Return Value

The substring

string-split

Splits a string into a list of strings

Argument
str The string to split
sep The separator to split on

Return Value

The list of strings

string-startsWith

Checks if a string starts with a prefix

Argument
str The string to check
prefix The prefix to check for

Return Value

Whether the string starts with the prefix

string-strip

Strip whitespace from both ends of a string.

Argument
str The string to strip.

Return Value

The stripped string.

string-upper

Converts a string to uppercase

Argument
str The string to convert to uppercase

Return Value

The uppercase string

string-levenshtein

Calculates the Levenshtein distance between two strings.

Argument
str1 The first string.
str2 The second string.

Return Value

The Levenshtein distance between the two strings.
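
For reference, the Levenshtein distance is commonly computed with the Wagner-Fischer dynamic program; a Python sketch of that standard algorithm (illustrative, not the op's actual implementation):

```python
def levenshtein(a, b):
    # Rolling-row edit-distance table: prev holds distances for the
    # previous prefix of a, cur is built for the current prefix.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(
                prev[j] + 1,               # deletion from a
                cur[j - 1] + 1,            # insertion into a
                prev[j - 1] + (ca != cb),  # substitution (free on match)
            ))
        prev = cur
    return prev[-1]
```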

List Ops

string-notEqual

Determines inequality of two values.

Argument
lhs The first value to compare.
rhs The second value to compare.

Return Value

Whether the two values are not equal.

string-add

Concatenates two strings

Argument
lhs The first string
rhs The second string

Return Value

The concatenated string

string-equal

Determines equality of two values.

Argument
lhs The first value to compare.
rhs The second value to compare.

Return Value

Whether the two values are equal.

string-append

Appends a suffix to a string

Argument
str The string to append to
suffix The suffix to append

Return Value

The string with the suffix appended

string-contains

Checks if a string contains a substring

Argument
str The string to check
sub The substring to check for

Return Value

Whether the string contains the substring

string-endsWith

Checks if a string ends with a suffix

Argument
str The string to check
suffix The suffix to check for

Return Value

Whether the string ends with the suffix

string-findAll

Finds all occurrences of a substring in a string

Argument
str The string to find occurrences of the substring in
sub The substring to find

Return Value

The list of indices of the substring in the string

string-isAlnum

Checks if a string is alphanumeric

Argument
str The string to check

Return Value

Whether the string is alphanumeric

string-isAlpha

Checks if a string is alphabetic

Argument
str The string to check

Return Value

Whether the string is alphabetic

string-isNumeric

Checks if a string is numeric

Argument
str The string to check

Return Value

Whether the string is numeric

string-lStrip

Strip leading whitespace

Argument
str The string to strip.

Return Value

The stripped string.

string-len

Returns the length of a string

Argument
str The string to check

Return Value

The length of the string

string-lower

Converts a string to lowercase

Argument
str The string to convert to lowercase

Return Value

The lowercase string

string-partition

Partitions a string into a list of strings

Argument
str The string to partition
sep The separator to partition on

Return Value

A list of strings: the string before the separator, the separator, and the string after the separator

string-prepend

Prepends a prefix to a string

Argument
str The string to prepend to
prefix The prefix to prepend

Return Value

The string with the prefix prepended

string-rStrip

Strip trailing whitespace

Argument
str The string to strip.

Return Value

The stripped string.

string-replace

Replaces all occurrences of a substring in a string

Argument
str The string to replace contents of
sub The substring to replace
newSub The substring to replace the old substring with

Return Value

The string with the replacements

string-slice

Slices a string into a substring based on beginning and end indices

Argument
str The string to slice
begin The beginning index of the substring
end The ending index of the substring

Return Value

The substring

string-split

Splits a string into a list of strings

Argument
str The string to split
sep The separator to split on

Return Value

The list of strings

string-startsWith

Checks if a string starts with a prefix

Argument
str The string to check
prefix The prefix to check for

Return Value

Whether the string starts with the prefix

string-strip

Strip whitespace from both ends of a string.

Argument
str The string to strip.

Return Value

The stripped string.

string-upper

Converts a string to uppercase

Argument
str The string to convert to uppercase

Return Value

The uppercase string

string-levenshtein

Calculates the Levenshtein distance between two strings.

Argument
str1 The first string.
str2 The second string.

Return Value

The Levenshtein distance between the two strings.

6.22 - table

Chainable Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

table-rows

Returns the rows of a table

Argument
table A table

Return Value

The rows of the table

List Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

table-rows

Returns the rows of a table

Argument
table A table

Return Value

The rows of the table

6.23 - user

Chainable Ops

user-username

Returns the username of the user

Argument
user A user

Return Value

The username of the user

List Ops

user-username

Returns the username of the user

Argument
user A user

Return Value

The username of the user

6.24 - video-file

Chainable Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset

List Ops

asset-file

Returns the file of the asset

Argument
asset The asset

Return Value

The file of the asset