Actions
- 1: agent
- 2: Api
- 3: Artifact
- 4: controller
- 5: define_metric
- 6: Error
- 7: finish
- 8: init
- 9: link_model
- 10: log
- 11: log_artifact
- 12: log_model
- 13: login
- 14: plot_table
- 15: save
- 16: setup
- 17: sweep
- 18: teardown
- 19: termerror
- 20: termlog
- 21: termsetup
- 22: termwarn
- 23: unwatch
- 24: use_artifact
- 25: use_model
- 26: watch
1 - agent
function agent
agent(
sweep_id: str,
function: Optional[Callable] = None,
entity: Optional[str] = None,
project: Optional[str] = None,
count: Optional[int] = None
) → None
Start one or more sweep agents.
The sweep agent uses the sweep_id
to know which sweep it is a part of, what function to execute, and (optionally) how many agents to run.
Args:
- `sweep_id`: The unique identifier for a sweep. A sweep ID is generated by the W&B CLI or Python SDK.
- `function`: A function to call instead of the "program" specified in the sweep config.
- `entity`: The username or team name where you want to send W&B runs created by the sweep. Ensure that the entity you specify already exists. If you don't specify an entity, the run will be sent to your default entity, which is usually your username.
- `project`: The name of the project where W&B runs created from the sweep are sent. If the project is not specified, the run is sent to a project labeled "Uncategorized".
- `count`: The number of sweep config trials to try.
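A minimal sketch of creating a sweep and starting an agent against it; the project name, metric, and training function below are illustrative placeholders, not part of this reference.
```python
import wandb

def train():
    # Placeholder objective: reads the sampled config and logs a fake loss.
    with wandb.init() as run:
        run.log({"loss": run.config.lr * 2})

sweep_config = {
    "method": "random",
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {"lr": {"min": 0.001, "max": 0.1}},
}
sweep_id = wandb.sweep(sweep_config, project="my-project")
wandb.agent(sweep_id, function=train, count=5)
```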
2 - Api
class Api
Used for querying the wandb server.
Examples:
The most common way to initialize the API is `wandb.Api()` (see the sketch below).
Args:
- `overrides`: (dict) You can set `base_url` if you are using a W&B server other than https://api.wandb.ai. You can also set defaults for `entity`, `project`, and `run`.
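A minimal sketch of the two initialization patterns described above; the server URL, entity, and project are placeholders.
```python
import wandb

# Most common way to initialize the API (assumes you are already logged in).
api = wandb.Api()

# With overrides, e.g. for a self-hosted server and default entity/project.
api = wandb.Api(
    overrides={
        "base_url": "https://wandb.my-company.com",
        "entity": "my-team",
        "project": "my-project",
    }
)
```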
method Api.__init__
__init__(
overrides: Optional[Dict[str, Any]] = None,
timeout: Optional[int] = None,
api_key: Optional[str] = None
) → None
property Api.api_key
property Api.client
property Api.default_entity
property Api.user_agent
property Api.viewer
method Api.artifact
artifact(name: str, type: Optional[str] = None)
Return a single artifact by parsing path in the form project/name
or entity/project/name
.
Args:
- `name`: (str) An artifact name. May be prefixed with project/ or entity/project/. If no entity is specified in the name, the Run or API setting's entity is used. Valid names can be in the following forms: `name:version` or `name:alias`.
- `type`: (str, optional) The type of artifact to fetch.
Returns:
An `Artifact` object.
Raises:
- `ValueError`: If the artifact name is not specified.
- `ValueError`: If the artifact type is specified but does not match the type of the fetched artifact.
Note:
This method is intended for external use only. Do not call
api.artifact()
within the wandb repository code.
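A hedged sketch of fetching an artifact version by alias; the artifact path and type are placeholders.
```python
import wandb

api = wandb.Api()
# Fetch a specific version of a dataset artifact by its "latest" alias.
artifact = api.artifact("my-entity/my-project/my-dataset:latest", type="dataset")
print(artifact.name, artifact.version)
```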
method Api.artifact_collection
artifact_collection(type_name: str, name: str) → public.ArtifactCollection
Return a single artifact collection by type and parsing path in the form entity/project/name
.
Args:
- `type_name`: (str) The type of artifact collection to fetch.
- `name`: (str) An artifact collection name. May be prefixed with entity/project.
Returns:
An ArtifactCollection
object.
method Api.artifact_collection_exists
artifact_collection_exists(name: str, type: str) → bool
Return whether an artifact collection exists within a specified project and entity.
Args:
- `name`: (str) An artifact collection name. May be prefixed with entity/project. If entity or project is not specified, it will be inferred from the override params if populated. Otherwise, entity will be pulled from the user settings and project will default to "uncategorized".
- `type`: (str) The type of artifact collection.
Returns: True if the artifact collection exists, False otherwise.
method Api.artifact_collections
artifact_collections(
project_name: str,
type_name: str,
per_page: Optional[int] = 50
) → public.ArtifactCollections
Return a collection of matching artifact collections.
Args:
- `project_name`: (str) The name of the project to filter on.
- `type_name`: (str) The name of the artifact type to filter on.
- `per_page`: (int, optional) Sets the page size for query pagination. None will use the default size. Usually there is no reason to change this.
Returns:
An iterable ArtifactCollections
object.
method Api.artifact_exists
artifact_exists(name: str, type: Optional[str] = None) → bool
Return whether an artifact version exists within a specified project and entity.
Args:
- `name`: (str) An artifact name. May be prefixed with entity/project. If entity or project is not specified, it will be inferred from the override params if populated. Otherwise, entity will be pulled from the user settings and project will default to "uncategorized". Valid names can be in the following forms: `name:version` or `name:alias`.
- `type`: (str, optional) The type of artifact.
Returns: True if the artifact version exists, False otherwise.
method Api.artifact_type
artifact_type(
type_name: str,
project: Optional[str] = None
) → public.ArtifactType
Return the matching ArtifactType
.
Args:
- `type_name`: (str) The name of the artifact type to retrieve.
- `project`: (str, optional) If given, a project name or path to filter on.
Returns:
An ArtifactType
object.
method Api.artifact_types
artifact_types(project: Optional[str] = None) → public.ArtifactTypes
Return a collection of matching artifact types.
Args:
project
: (str, optional) If given, a project name or path to filter on.
Returns:
An iterable ArtifactTypes
object.
method Api.artifact_versions
artifact_versions(type_name, name, per_page=50)
Deprecated, use artifacts(type_name, name)
instead.
method Api.artifacts
artifacts(
type_name: str,
name: str,
per_page: Optional[int] = 50,
tags: Optional[List[str]] = None
) → public.Artifacts
Return an Artifacts
collection from the given parameters.
Args:
- `type_name`: (str) The type of artifacts to fetch.
- `name`: (str) An artifact collection name. May be prefixed with entity/project.
- `per_page`: (int, optional) Sets the page size for query pagination. None will use the default size. Usually there is no reason to change this.
- `tags`: (list[str], optional) Only return artifacts with all of these tags.
Returns:
An iterable Artifacts
object.
method Api.create_project
create_project(name: str, entity: str) → None
Create a new project.
Args:
- `name`: (str) The name of the new project.
- `entity`: (str) The entity of the new project.
method Api.create_run
create_run(
run_id: Optional[str] = None,
project: Optional[str] = None,
entity: Optional[str] = None
) → public.Run
Create a new run.
Args:
- `run_id`: (str, optional) The ID to assign to the run, if given. The run ID is automatically generated by default, so in general, you do not need to specify this and should only do so at your own risk.
- `project`: (str, optional) If given, the project of the new run.
- `entity`: (str, optional) If given, the entity of the new run.
Returns:
The newly created Run
.
method Api.create_run_queue
create_run_queue(
name: str,
type: 'public.RunQueueResourceType',
entity: Optional[str] = None,
prioritization_mode: Optional[ForwardRef('public.RunQueuePrioritizationMode')] = None,
config: Optional[dict] = None,
template_variables: Optional[dict] = None
) → public.RunQueue
Create a new run queue (launch).
Args:
- `name`: (str) Name of the queue to create.
- `type`: (str) Type of resource to be used for the queue. One of "local-container", "local-process", "kubernetes", "sagemaker", or "gcp-vertex".
- `entity`: (str) Optional name of the entity to create the queue. If None, will use the configured or default entity.
- `prioritization_mode`: (str) Optional version of prioritization to use. Either "V0" or None.
- `config`: (dict) Optional default resource configuration to be used for the queue. Use handlebars (e.g. `{{var}}`) to specify template variables.
- `template_variables`: (dict) A dictionary of template variable schemas to be used with the config. Expected format: `{"var-name": {"schema": {"type": ("string", "number", or "integer"), "default": (optional value), "minimum": (optional minimum), "maximum": (optional maximum), "enum": [..."(options)"]}}}`
Returns:
The newly created RunQueue
Raises:
- `ValueError`: If any of the parameters are invalid.
- `wandb.Error`: On W&B API errors.
method Api.create_team
create_team(team, admin_username=None)
Create a new team.
Args:
- `team`: (str) The name of the team.
- `admin_username`: (str) Optional username of the admin user of the team; defaults to the current user.
Returns:
A Team
object
method Api.create_user
create_user(email, admin=False)
Create a new user.
Args:
- `email`: (str) The email address of the user.
- `admin`: (bool) Whether this user should be a global instance admin.
Returns:
A User
object
method Api.flush
flush()
Flush the local cache.
The api object keeps a local cache of runs, so if the state of the run may change while executing your script you must clear the local cache with api.flush()
to get the latest values associated with the run.
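A small sketch of the cache-clearing pattern described above; the run path is a placeholder.
```python
import wandb

api = wandb.Api()
run = api.run("my-entity/my-project/abcd1234")
# ... the run may keep logging elsewhere while this script executes ...
api.flush()                                      # clear the local cache
run = api.run("my-entity/my-project/abcd1234")   # re-fetch to see the latest values
print(run.summary)
```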
method Api.from_path
from_path(path)
Return a run, sweep, project or report from a path.
Examples:
```python
project = api.from_path("my_project")
team_project = api.from_path("my_team/my_project")
run = api.from_path("my_team/my_project/runs/id")
sweep = api.from_path("my_team/my_project/sweeps/id")
report = api.from_path("my_team/my_project/reports/My-Report-Vm11dsdf")
```
Args:
- `path`: (str) The path to the project, run, sweep or report.
Returns:
A `Project`, `Run`, `Sweep`, or `BetaReport` instance.
Raises:
- `wandb.Error`: If the path is invalid or the object doesn't exist.
method Api.job
job(name: Optional[str], path: Optional[str] = None) → public.Job
Return a `Job` from the given parameters.
Args:
- `name`: (str) The job name.
- `path`: (str, optional) If given, the root path in which to download the job artifact.
Returns:
A `Job` object.
method Api.list_jobs
list_jobs(entity: str, project: str) → List[Dict[str, Any]]
Return a list of jobs, if any, for the given entity and project.
Args:
- `entity`: (str) The entity for the listed job(s).
- `project`: (str) The project for the listed job(s).
Returns: A list of matching jobs.
method Api.project
project(name: str, entity: Optional[str] = None) → public.Project
Return the Project
with the given name (and entity, if given).
Args:
- `name`: (str) The project name.
- `entity`: (str) Name of the entity requested. If None, will fall back to the default entity passed to `Api`. If no default entity, will raise a `ValueError`.
Returns:
A Project
object.
method Api.projects
projects(
entity: Optional[str] = None,
per_page: Optional[int] = 200
) → public.Projects
Get projects for a given entity.
Args:
- `entity`: (str) Name of the entity requested. If None, will fall back to the default entity passed to `Api`. If no default entity, will raise a `ValueError`.
- `per_page`: (int) Sets the page size for query pagination. None will use the default size. Usually there is no reason to change this.
Returns:
A Projects
object which is an iterable collection of Project
objects.
method Api.queued_run
queued_run(
entity,
project,
queue_name,
run_queue_item_id,
project_queue=None,
priority=None
)
Return a single queued run based on the path.
Parses paths of the form entity/project/queue_id/run_queue_item_id.
method Api.reports
reports(
path: str = '',
name: Optional[str] = None,
per_page: Optional[int] = 50
) → public.Reports
Get reports for a given project path.
WARNING: This api is in beta and will likely change in a future release
Args:
- `path`: (str) Path to the project the report resides in; should be in the form "entity/project".
- `name`: (str, optional) Optional name of the report requested.
- `per_page`: (int) Sets the page size for query pagination. None will use the default size. Usually there is no reason to change this.
Returns:
A Reports
object which is an iterable collection of BetaReport
objects.
method Api.run
run(path='')
Return a single run by parsing path in the form entity/project/run_id.
Args:
- `path`: (str) Path to the run in the form `entity/project/run_id`. If `api.entity` is set, this can be in the form `project/run_id`, and if `api.project` is set this can just be the run_id.
Returns:
A Run
object.
method Api.run_queue
run_queue(entity, name)
Return the named RunQueue
for entity.
To create a new RunQueue
, use wandb.Api().create_run_queue(...)
.
method Api.runs
runs(
path: Optional[str] = None,
filters: Optional[Dict[str, Any]] = None,
order: str = '+created_at',
per_page: int = 50,
include_sweeps: bool = True
)
Return a set of runs from a project that match the filters provided.
Fields you can filter by include:
- `createdAt`: The timestamp when the run was created. (In ISO 8601 format, e.g. "2023-01-01T12:00:00Z".)
- `displayName`: The human-readable display name of the run. (e.g. "eager-fox-1")
- `duration`: The total runtime of the run in seconds.
- `group`: The group name used to organize related runs together.
- `host`: The hostname where the run was executed.
- `jobType`: The type of job or purpose of the run.
- `name`: The unique identifier of the run. (e.g. "a1b2cdef")
- `state`: The current state of the run.
- `tags`: The tags associated with the run.
- `username`: The username of the user who initiated the run.
Additionally, you can filter by items in the run config or summary metrics, such as `config.experiment_name`, `summary_metrics.loss`, etc.
For more complex filtering, you can use MongoDB query operators. For details, see: https://docs.mongodb.com/manual/reference/operator/query The following operations are supported: `$and`, `$or`, `$nor`, `$eq`, `$ne`, `$gt`, `$gte`, `$lt`, `$lte`, `$in`, `$nin`, `$exists`, `$regex`.
Examples:
Find runs in my_project where config.experiment_name has been set to "foo":
api.runs(path="my_entity/my_project", filters={"config.experiment_name": "foo"})
Find runs in my_project where config.experiment_name has been set to "foo" or "bar":
api.runs(path="my_entity/my_project", filters={"$or": [{"config.experiment_name": "foo"}, {"config.experiment_name": "bar"}]})
Find runs in my_project where config.experiment_name matches a regex (anchors are not supported):
api.runs(path="my_entity/my_project", filters={"config.experiment_name": {"$regex": "b.*"}})
Find runs in my_project where the run name matches a regex (anchors are not supported):
api.runs(path="my_entity/my_project", filters={"display_name": {"$regex": "^foo.*"}})
Find runs in my_project where config.experiment contains a nested field "category" with value "testing":
api.runs(path="my_entity/my_project", filters={"config.experiment.category": "testing"})
Find runs in my_project with a loss value of 0.5 nested in a dictionary under model1 in the summary metrics:
api.runs(path="my_entity/my_project", filters={"summary_metrics.model1.loss": 0.5})
Find runs in my_project sorted by ascending loss:
api.runs(path="my_entity/my_project", order="+summary_metrics.loss")
Args:
- `path`: (str) Path to the project; should be in the form "entity/project".
- `filters`: (dict) Queries for specific runs using the MongoDB query language. You can filter by run properties such as config.key, summary_metrics.key, state, entity, createdAt, etc. For example, `{"config.experiment_name": "foo"}` would find runs with a config entry of experiment name set to "foo".
- `order`: (str) Order can be `created_at`, `heartbeat_at`, `config.*.value`, or `summary_metrics.*`. If you prepend order with a + order is ascending. If you prepend order with a - order is descending (default). The default order is run.created_at from oldest to newest.
- `per_page`: (int) Sets the page size for query pagination.
- `include_sweeps`: (bool) Whether to include the sweep runs in the results.
Returns:
A Runs
object, which is an iterable collection of Run
objects.
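A short sketch of iterating over filtered runs; the project path and config/summary keys are placeholders.
```python
import wandb

api = wandb.Api()
runs = api.runs(
    "my-entity/my-project",
    filters={"state": "finished"},
    order="-created_at",
)
for run in runs:
    print(run.name, run.config.get("lr"), run.summary.get("loss"))
```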
method Api.sweep
sweep(path='')
Return a sweep by parsing path in the form entity/project/sweep_id
.
Args:
- `path`: (str, optional) Path to the sweep in the form entity/project/sweep_id. If `api.entity` is set, this can be in the form project/sweep_id, and if `api.project` is set this can just be the sweep_id.
Returns:
A Sweep
object.
method Api.sync_tensorboard
sync_tensorboard(root_dir, run_id=None, project=None, entity=None)
Sync a local directory containing tfevent files to wandb.
method Api.team
team(team: str) → public.Team
Return the matching Team
with the given name.
Args:
team
: (str) The name of the team.
Returns:
A Team
object.
method Api.upsert_run_queue
upsert_run_queue(
name: str,
resource_config: dict,
resource_type: 'public.RunQueueResourceType',
entity: Optional[str] = None,
template_variables: Optional[dict] = None,
external_links: Optional[dict] = None,
prioritization_mode: Optional[ForwardRef('public.RunQueuePrioritizationMode')] = None
)
Upsert a run queue (launch).
Args:
- `name`: (str) Name of the queue to create.
- `entity`: (str) Optional name of the entity to create the queue. If None, will use the configured or default entity.
- `resource_config`: (dict) Optional default resource configuration to be used for the queue. Use handlebars (e.g. `{{var}}`) to specify template variables.
- `resource_type`: (str) Type of resource to be used for the queue. One of "local-container", "local-process", "kubernetes", "sagemaker", or "gcp-vertex".
- `template_variables`: (dict) A dictionary of template variable schemas to be used with the config. Expected format: `{"var-name": {"schema": {"type": ("string", "number", or "integer"), "default": (optional value), "minimum": (optional minimum), "maximum": (optional maximum), "enum": [..."(options)"]}}}`
- `external_links`: (dict) Optional dictionary of external links to be used with the queue. Expected format: `{"name": "url"}`
- `prioritization_mode`: (str) Optional version of prioritization to use. Either "V0" or None.
Returns:
The upserted RunQueue
.
Raises:
- `ValueError`: If any of the parameters are invalid.
- `wandb.Error`: On W&B API errors.
method Api.user
user(username_or_email: str) → Optional[ForwardRef('public.User')]
Return a user from a username or email address.
Note: This function only works for local admins. If you are trying to get your own user object, use `api.viewer` instead.
Args:
username_or_email
: (str) The username or email address of the user
Returns:
A User
object or None if a user couldn’t be found
method Api.users
users(username_or_email: str) → List[ForwardRef('public.User')]
Return all users from a partial username or email address query.
Note: This function only works for local admins. If you are trying to get your own user object, use `api.viewer` instead.
Args:
username_or_email
: (str) The prefix or suffix of the user you want to find
Returns:
An array of User
objects
3 - Artifact
class Artifact
Flexible and lightweight building block for dataset and model versioning.
Construct an empty W&B Artifact. Populate an artifact's contents with methods that begin with `add`. Once the artifact has all the desired files, you can call `wandb.log_artifact()` to log it.
Args:
- `name`: A human-readable name for the artifact. Use the name to identify a specific artifact in the W&B App UI or programmatically. You can interactively reference an artifact with the `use_artifact` Public API. A name can contain letters, numbers, underscores, hyphens, and dots. The name must be unique across a project.
- `type`: The artifact's type. Use the type of an artifact to both organize and differentiate artifacts. You can use any string that contains letters, numbers, underscores, hyphens, and dots. Common types include `dataset` or `model`. Include `model` within your type string if you want to link the artifact to the W&B Model Registry.
- `description`: A description of the artifact. For Model or Dataset Artifacts, add documentation for your standardized team model or dataset card. View an artifact's description programmatically with the `Artifact.description` attribute or in the W&B App UI. W&B renders the description as markdown in the W&B App.
- `metadata`: Additional information about an artifact. Specify metadata as a dictionary of key-value pairs. You can specify no more than 100 total keys.
Returns:
An Artifact
object.
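A minimal sketch of constructing and logging an artifact; the file and project names are placeholders.
```python
import wandb

with wandb.init(project="my-project") as run:
    artifact = wandb.Artifact(name="training-data", type="dataset")
    artifact.add_file("data.csv")  # assumes data.csv exists locally
    run.log_artifact(artifact)
```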
method Artifact.__init__
__init__(
name: 'str',
type: 'str',
description: 'str | None' = None,
metadata: 'dict[str, Any] | None' = None,
incremental: 'bool' = False,
use_as: 'str | None' = None
) → None
property Artifact.aliases
List of one or more semantically-friendly references or identifying “nicknames” assigned to an artifact version.
Aliases are mutable references that you can programmatically reference. Change an artifact’s alias with the W&B App UI or programmatically. See Create new artifact versions for more information.
property Artifact.collection
The collection this artifact was retrieved from.
A collection is an ordered group of artifact versions. If this artifact was retrieved from a portfolio / linked collection, that collection will be returned rather than the collection that an artifact version originated from. The collection that an artifact originates from is known as the source sequence.
property Artifact.commit_hash
The hash returned when this artifact was committed.
property Artifact.created_at
Timestamp when the artifact was created.
property Artifact.description
A description of the artifact.
property Artifact.digest
The logical digest of the artifact.
The digest is the checksum of the artifact’s contents. If an artifact has the same digest as the current latest
version, then log_artifact
is a no-op.
property Artifact.distributed_id
property Artifact.entity
The name of the entity of the secondary (portfolio) artifact collection.
property Artifact.file_count
The number of files (including references).
property Artifact.id
The artifact’s ID.
property Artifact.incremental
property Artifact.manifest
The artifact’s manifest.
The manifest lists all of its contents, and can’t be changed once the artifact has been logged.
property Artifact.metadata
User-defined artifact metadata.
Structured data associated with the artifact.
property Artifact.name
The artifact name and version in its secondary (portfolio) collection.
A string with the format {collection}:{alias}
. Before the artifact is saved, contains only the name since the version is not yet known.
property Artifact.project
The name of the project of the secondary (portfolio) artifact collection.
property Artifact.qualified_name
The entity/project/name of the secondary (portfolio) collection.
property Artifact.size
The total size of the artifact in bytes.
Includes any references tracked by this artifact.
property Artifact.source_collection
The artifact’s primary (sequence) collection.
property Artifact.source_entity
The name of the entity of the primary (sequence) artifact collection.
property Artifact.source_name
The artifact name and version in its primary (sequence) collection.
A string with the format {collection}:{alias}
. Before the artifact is saved, contains only the name since the version is not yet known.
property Artifact.source_project
The name of the project of the primary (sequence) artifact collection.
property Artifact.source_qualified_name
The entity/project/name of the primary (sequence) collection.
property Artifact.source_version
The artifact’s version in its primary (sequence) collection.
A string with the format v{number}
.
property Artifact.state
The status of the artifact. One of: “PENDING”, “COMMITTED”, or “DELETED”.
property Artifact.tags
List of one or more tags assigned to this artifact version.
property Artifact.ttl
The time-to-live (TTL) policy of an artifact.
Artifacts are deleted shortly after a TTL policy's duration passes. If set to `None`, the artifact deactivates TTL policies and will not be scheduled for deletion, even if there is a team default TTL. An artifact inherits a TTL policy from the team default if the team administrator defines a default TTL and there is no custom policy set on an artifact.
Raises:
ArtifactNotLoggedError
: Unable to fetch inherited TTL if the artifact has not been logged or saved
property Artifact.type
The artifact’s type. Common types include dataset
or model
.
property Artifact.updated_at
The time when the artifact was last updated.
property Artifact.url
Constructs the URL of the artifact.
Returns:
str
: The URL of the artifact.
property Artifact.use_as
property Artifact.version
The artifact’s version in its secondary (portfolio) collection.
method Artifact.add
add(
obj: 'WBValue',
name: 'StrPath',
overwrite: 'bool' = False
) → ArtifactManifestEntry
Add wandb.WBValue obj
to the artifact.
Args:
- `obj`: The object to add. Currently supports one of Bokeh, JoinedTable, PartitionedTable, Table, Classes, ImageMask, BoundingBoxes2D, Audio, Image, Video, Html, Object3D.
- `name`: The path within the artifact to add the object.
- `overwrite`: If True, overwrite existing objects with the same file path (if applicable).
Returns: The added manifest entry.
Raises:
ArtifactFinalizedError
: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
method Artifact.add_dir
add_dir(
local_path: 'str',
name: 'str | None' = None,
skip_cache: 'bool | None' = False,
policy: "Literal['mutable', 'immutable'] | None" = 'mutable'
) → None
Add a local directory to the artifact.
Args:
- `local_path`: The path of the local directory.
- `name`: The subdirectory name within an artifact. The name you specify appears in the W&B App UI nested by artifact's `type`. Defaults to the root of the artifact.
- `skip_cache`: If set to `True`, W&B will not copy/move files to the cache while uploading.
- `policy`: "mutable" | "immutable". By default, "mutable".
  - "mutable": Create a temporary copy of the file to prevent corruption during upload.
  - "immutable": Disable protection, rely on the user not to delete or change the file.
Raises:
- `ArtifactFinalizedError`: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
- `ValueError`: Policy must be "mutable" or "immutable".
method Artifact.add_file
add_file(
local_path: 'str',
name: 'str | None' = None,
is_tmp: 'bool | None' = False,
skip_cache: 'bool | None' = False,
policy: "Literal['mutable', 'immutable'] | None" = 'mutable',
overwrite: 'bool' = False
) → ArtifactManifestEntry
Add a local file to the artifact.
Args:
- `local_path`: The path to the file being added.
- `name`: The path within the artifact to use for the file being added. Defaults to the basename of the file.
- `is_tmp`: If true, then the file is renamed deterministically to avoid collisions.
- `skip_cache`: If `True`, W&B will not copy files to the cache after uploading.
- `policy`: By default, set to "mutable". If set to "mutable", create a temporary copy of the file to prevent corruption during upload. If set to "immutable", disable protection and rely on the user not to delete or change the file.
- `overwrite`: If `True`, overwrite the file if it already exists.
Returns: The added manifest entry.
Raises:
- `ArtifactFinalizedError`: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
- `ValueError`: Policy must be "mutable" or "immutable".
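A short sketch combining `add_file` and `add_dir`; the file and directory paths are placeholders assumed to exist locally.
```python
import wandb

artifact = wandb.Artifact(name="my-model", type="model")
artifact.add_file("checkpoints/best.pt", name="best.pt")  # single file
artifact.add_dir("configs/", name="configs")              # whole directory
```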
method Artifact.add_reference
add_reference(
uri: 'ArtifactManifestEntry | str',
name: 'StrPath | None' = None,
checksum: 'bool' = True,
max_objects: 'int | None' = None
) → Sequence[ArtifactManifestEntry]
Add a reference denoted by a URI to the artifact.
Unlike files or directories that you add to an artifact, references are not uploaded to W&B. For more information, see Track external files.
By default, the following schemes are supported:
- http(s): The size and digest of the file will be inferred by the `Content-Length` and the `ETag` response headers returned by the server.
- s3: The checksum and size are pulled from the object metadata. If bucket versioning is enabled, then the version ID is also tracked.
- gs: The checksum and size are pulled from the object metadata. If bucket versioning is enabled, then the version ID is also tracked.
- https, domain matching `*.blob.core.windows.net` (Azure): The checksum and size are pulled from the blob metadata. If storage account versioning is enabled, then the version ID is also tracked.
- file: The checksum and size are pulled from the file system. This scheme is useful if you have an NFS share or other externally mounted volume containing files you wish to track but not necessarily upload.
For any other scheme, the digest is just a hash of the URI and the size is left blank.
Args:
- `uri`: The URI path of the reference to add. The URI path can be an object returned from `Artifact.get_entry` to store a reference to another artifact's entry.
- `name`: The path within the artifact to place the contents of this reference.
- `checksum`: Whether or not to checksum the resource(s) located at the reference URI. Checksumming is strongly recommended as it enables automatic integrity validation. Disabling checksumming will speed up artifact creation, but reference directories will not be iterated through, so the objects in the directory will not be saved to the artifact. We recommend setting `checksum=False` when adding reference objects, in which case a new version will only be created if the reference URI changes.
- `max_objects`: The maximum number of objects to consider when adding a reference that points to a directory or bucket store prefix. By default, the maximum number of objects allowed for Amazon S3, GCS, Azure, and local files is 10,000,000. Other URI schemas do not have a maximum.
Returns: The added manifest entries.
Raises:
ArtifactFinalizedError
: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
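A sketch of tracking bucket objects by reference rather than uploading them; the bucket URI is a placeholder.
```python
import wandb

artifact = wandb.Artifact(name="raw-images", type="dataset")
# Track up to 100,000 objects under the prefix without copying them to W&B.
artifact.add_reference("s3://my-bucket/images/", max_objects=100_000)
```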
method Artifact.checkout
checkout(root: 'str | None' = None) → str
Replace the specified root directory with the contents of the artifact.
WARNING: This will delete all files in root
that are not included in the artifact.
Args:
root
: The directory to replace with this artifact’s files.
Returns: The path of the checked out contents.
Raises:
ArtifactNotLoggedError
: If the artifact is not logged.
method Artifact.delete
delete(delete_aliases: 'bool' = False) → None
Delete an artifact and its files.
If called on a linked artifact (i.e. a member of a portfolio collection): only the link is deleted, and the source artifact is unaffected.
Args:
- `delete_aliases`: If set to `True`, deletes all aliases associated with the artifact. Otherwise, this raises an exception if the artifact has existing aliases. This parameter is ignored if the artifact is linked (i.e. a member of a portfolio collection).
Raises:
ArtifactNotLoggedError
: If the artifact is not logged.
method Artifact.download
download(
root: 'StrPath | None' = None,
allow_missing_references: 'bool' = False,
skip_cache: 'bool | None' = None,
path_prefix: 'StrPath | None' = None
) → FilePathStr
Download the contents of the artifact to the specified root directory.
Existing files located within root
are not modified. Explicitly delete root
before you call download
if you want the contents of root
to exactly match the artifact.
Args:
- `root`: The directory W&B stores the artifact's files.
- `allow_missing_references`: If set to `True`, any invalid reference paths will be ignored while downloading referenced files.
- `skip_cache`: If set to `True`, the artifact cache will be skipped when downloading and W&B will download each file into the default root or specified download directory.
- `path_prefix`: If specified, only files with a path that starts with the given prefix will be downloaded. Uses unix format (forward slashes).
Returns: The path to the downloaded contents.
Raises:
- `ArtifactNotLoggedError`: If the artifact is not logged.
- `RuntimeError`: If the artifact is attempted to be downloaded in offline mode.
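A hedged sketch of downloading part of a logged artifact; the artifact path and prefix are placeholders.
```python
import wandb

api = wandb.Api()
artifact = api.artifact("my-entity/my-project/my-dataset:latest")
# Only files under images/ are downloaded into ./data.
local_path = artifact.download(root="./data", path_prefix="images/")
print(local_path)
```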
method Artifact.file
file(root: 'str | None' = None) → StrPath
Download a single file artifact to the directory you specify with root
.
Args:
root
: The root directory to store the file. Defaults to ‘./artifacts/self.name/’.
Returns: The full path of the downloaded file.
Raises:
- `ArtifactNotLoggedError`: If the artifact is not logged.
- `ValueError`: If the artifact contains more than one file.
method Artifact.files
files(names: 'list[str] | None' = None, per_page: 'int' = 50) → ArtifactFiles
Iterate over all files stored in this artifact.
Args:
- `names`: The filename paths relative to the root of the artifact you wish to list.
- `per_page`: The number of files to return per request.
Returns:
An iterator containing File
objects.
Raises:
ArtifactNotLoggedError
: If the artifact is not logged.
method Artifact.finalize
finalize() → None
Finalize the artifact version.
You cannot modify an artifact version once it is finalized because the artifact is logged as a specific artifact version. Create a new artifact version to log more data to an artifact. An artifact is automatically finalized when you log the artifact with log_artifact
.
method Artifact.get
get(name: 'str') → WBValue | None
Get the WBValue object located at the artifact relative name
.
Args:
name
: The artifact relative name to retrieve.
Returns:
W&B object that can be logged with wandb.log()
and visualized in the W&B UI.
Raises:
ArtifactNotLoggedError
: if the artifact isn’t logged or the run is offline
method Artifact.get_added_local_path_name
get_added_local_path_name(local_path: 'str') → str | None
Get the artifact relative name of a file added by a local filesystem path.
Args:
local_path
: The local path to resolve into an artifact relative name.
Returns: The artifact relative name.
method Artifact.get_entry
get_entry(name: 'StrPath') → ArtifactManifestEntry
Get the entry with the given name.
Args:
name
: The artifact relative name to get
Returns:
A W&B
object.
Raises:
- `ArtifactNotLoggedError`: If the artifact isn't logged or the run is offline.
- `KeyError`: If the artifact doesn't contain an entry with the given name.
method Artifact.get_path
get_path(name: 'StrPath') → ArtifactManifestEntry
Deprecated. Use get_entry(name)
.
method Artifact.is_draft
is_draft() → bool
Check if artifact is not saved.
Returns: Boolean. False
if artifact is saved. True
if artifact is not saved.
method Artifact.json_encode
json_encode() → dict[str, Any]
Returns the artifact encoded to the JSON format.
Returns:
A dict
with string
keys representing attributes of the artifact.
method Artifact.link
link(target_path: 'str', aliases: 'list[str] | None' = None) → None
Link this artifact to a portfolio (a promoted collection of artifacts).
Args:
- `target_path`: The path to the portfolio inside a project. The target path must adhere to one of the following schemas: `{portfolio}`, `{project}/{portfolio}`, or `{entity}/{project}/{portfolio}`. To link the artifact to the Model Registry, rather than to a generic portfolio inside a project, set `target_path` to the following schema: `{"model-registry"}/{Registered Model Name}` or `{entity}/{"model-registry"}/{Registered Model Name}`.
- `aliases`: A list of strings that uniquely identifies the artifact inside the specified portfolio.
Raises:
ArtifactNotLoggedError
: If the artifact is not logged.
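A sketch of logging a model artifact and linking it to the Model Registry; the registered model name, project, and file path are placeholders.
```python
import wandb

with wandb.init(project="my-project") as run:
    artifact = wandb.Artifact(name="my-model", type="model")
    artifact.add_file("model.pt")  # assumes the checkpoint exists locally
    logged = run.log_artifact(artifact)
    logged.wait()  # make sure the version is committed before linking
    logged.link("model-registry/My Registered Model", aliases=["staging"])
```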
method Artifact.logged_by
logged_by() → Run | None
Get the W&B run that originally logged the artifact.
Returns: The name of the W&B run that originally logged the artifact.
Raises:
ArtifactNotLoggedError
: If the artifact is not logged.
method Artifact.new_draft
new_draft() → Artifact
Create a new draft artifact with the same content as this committed artifact.
The artifact returned can be extended or modified and logged as a new version.
Returns:
An Artifact
object.
Raises:
ArtifactNotLoggedError
: If the artifact is not logged.
method Artifact.new_file
new_file(
name: 'str',
mode: 'str' = 'x',
encoding: 'str | None' = None
) → Iterator[IO]
Open a new temporary file and add it to the artifact.
Args:
- `name`: The name of the new file to add to the artifact.
- `mode`: The file access mode to use to open the new file.
- `encoding`: The encoding used to open the new file.
Returns: A new file object that can be written to. Upon closing, the file will be automatically added to the artifact.
Raises:
ArtifactFinalizedError
: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
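A small sketch of `new_file` used as a context manager; the file name and contents are placeholders.
```python
import wandb

artifact = wandb.Artifact(name="predictions", type="dataset")
# The temporary file is added to the artifact when the block exits.
with artifact.new_file("report.txt") as f:
    f.write("accuracy: 0.9\n")
```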
method Artifact.remove
remove(item: 'StrPath | ArtifactManifestEntry') → None
Remove an item from the artifact.
Args:
item
: The item to remove. Can be a specific manifest entry or the name of an artifact-relative path. If the item matches a directory all items in that directory will be removed.
Raises:
- `ArtifactFinalizedError`: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
- `FileNotFoundError`: If the item isn't found in the artifact.
method Artifact.save
save(
project: 'str | None' = None,
settings: 'wandb.Settings | None' = None
) → None
Persist any changes made to the artifact.
If currently in a run, that run will log this artifact. If not currently in a run, a run of type “auto” is created to track this artifact.
Args:
- `project`: A project to use for the artifact in the case that a run is not already in context.
- `settings`: A settings object to use when initializing an automatic run. Most commonly used in testing harness.
method Artifact.unlink
unlink() → None
Unlink this artifact if it is currently a member of a portfolio (a promoted collection of artifacts).
Raises:
- `ArtifactNotLoggedError`: If the artifact is not logged.
- `ValueError`: If the artifact is not linked, i.e. it is not a member of a portfolio collection.
method Artifact.used_by
used_by() → list[Run]
Get a list of the runs that have used this artifact.
Returns:
A list of Run
objects.
Raises:
ArtifactNotLoggedError
: If the artifact is not logged.
method Artifact.verify
verify(root: 'str | None' = None) → None
Verify that the contents of an artifact match the manifest.
All files in the directory are checksummed and the checksums are then cross-referenced against the artifact’s manifest. References are not verified.
Args:
- `root`: The directory to verify. If None, the artifact will be downloaded to './artifacts/self.name/'.
Raises:
- `ArtifactNotLoggedError`: If the artifact is not logged.
- `ValueError`: If the verification fails.
method Artifact.wait
wait(timeout: 'int | None' = None) → Artifact
If needed, wait for this artifact to finish logging.
Args:
timeout
: The time, in seconds, to wait.
Returns:
An Artifact
object.
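A sketch of waiting on a just-logged artifact; the project and artifact names are placeholders.
```python
import wandb

with wandb.init(project="my-project") as run:
    artifact = run.log_artifact(wandb.Artifact(name="results", type="dataset"))
    artifact.wait()          # block until the upload is committed
    print(artifact.version)  # the version is only known after logging completes
```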
4 - controller
function controller
controller(
sweep_id_or_config: Optional[str, Dict] = None,
entity: Optional[str] = None,
project: Optional[str] = None
) → _WandbController
Public sweep controller constructor.
Usage:
```python
import wandb

tuner = wandb.controller(...)
print(tuner.sweep_config)
print(tuner.sweep_id)
tuner.configure_search(...)
tuner.configure_stopping(...)
```
5 - define_metric
function wandb.define_metric
wandb.define_metric(
name: 'str',
step_metric: 'str | wandb_metric.Metric | None' = None,
step_sync: 'bool | None' = None,
hidden: 'bool | None' = None,
summary: 'str | None' = None,
goal: 'str | None' = None,
overwrite: 'bool | None' = None
) → wandb_metric.Metric
Customize metrics logged with wandb.log()
.
Args:
- `name`: The name of the metric to customize.
- `step_metric`: The name of another metric to serve as the X-axis for this metric in automatically generated charts.
- `step_sync`: Automatically insert the last value of step_metric into `run.log()` if it is not provided explicitly. Defaults to True if step_metric is specified.
- `hidden`: Hide this metric from automatic plots.
- `summary`: Specify aggregate metrics added to summary. Supported aggregations include "min", "max", "mean", "last", "best", "copy" and "none". "best" is used together with the goal parameter. "none" prevents a summary from being generated. "copy" is deprecated and should not be used.
- `goal`: Specify how to interpret the "best" summary type. Supported options are "minimize" and "maximize".
- `overwrite`: If false, then this call is merged with previous `define_metric` calls for the same metric by using their values for any unspecified parameters. If true, then unspecified parameters overwrite values specified by previous calls.
Returns: An object that represents this call but can otherwise be discarded.
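A minimal sketch of the customizations above, written with the run-bound `run.define_metric` form; the metric names and project are placeholders.
```python
import wandb

run = wandb.init(project="my-project")
run.define_metric("epoch")
run.define_metric("val/*", step_metric="epoch")   # plot val/* against epoch
run.define_metric("val/accuracy", summary="max")  # keep the best accuracy in the summary
run.log({"epoch": 1, "val/accuracy": 0.7, "val/loss": 0.4})
run.finish()
```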
6 - Error
class Error
Base W&B Error.
method Error.__init__
__init__(message, context: Optional[dict] = None) → None
7 - finish
function finish
finish(exit_code: 'int | None' = None, quiet: 'bool | None' = None) → None
Finish a run and upload any remaining data.
Marks the completion of a W&B run and ensures all data is synced to the server. The run’s final state is determined by its exit conditions and sync status.
Run States:
- Running: Active run that is logging data and/or sending heartbeats.
- Crashed: Run that stopped sending heartbeats unexpectedly.
- Finished: Run completed successfully (`exit_code=0`) with all data synced.
- Failed: Run completed with errors (`exit_code!=0`).
Args:
- `exit_code`: Integer indicating the run's exit status. Use 0 for success, any other value marks the run as failed.
- `quiet`: Deprecated. Configure logging verbosity using `wandb.Settings(quiet=...)`.
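A short sketch of marking a run finished or failed explicitly; the project name is a placeholder.
```python
import wandb

run = wandb.init(project="my-project")
try:
    run.log({"loss": 0.1})
    run.finish()             # exit_code defaults to success
except Exception:
    run.finish(exit_code=1)  # any non-zero value marks the run as failed
    raise
```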
8 - init
function init
init(
entity: 'str | None' = None,
project: 'str | None' = None,
dir: 'StrPath | None' = None,
id: 'str | None' = None,
name: 'str | None' = None,
notes: 'str | None' = None,
tags: 'Sequence[str] | None' = None,
config: 'dict[str, Any] | str | None' = None,
config_exclude_keys: 'list[str] | None' = None,
config_include_keys: 'list[str] | None' = None,
allow_val_change: 'bool | None' = None,
group: 'str | None' = None,
job_type: 'str | None' = None,
mode: "Literal['online', 'offline', 'disabled'] | None" = None,
force: 'bool | None' = None,
anonymous: "Literal['never', 'allow', 'must'] | None" = None,
reinit: 'bool | None' = None,
resume: "bool | Literal['allow', 'never', 'must', 'auto'] | None" = None,
resume_from: 'str | None' = None,
fork_from: 'str | None' = None,
save_code: 'bool | None' = None,
tensorboard: 'bool | None' = None,
sync_tensorboard: 'bool | None' = None,
monitor_gym: 'bool | None' = None,
settings: 'Settings | dict[str, Any] | None' = None
) → Run
Start a new run to track and log to W&B.
In an ML training pipeline, you could add wandb.init()
to the beginning of your training script as well as your evaluation script, and each piece would be tracked as a run in W&B.
wandb.init()
spawns a new background process to log data to a run, and it also syncs data to https://wandb.ai by default, so you can see your results in real-time.
Call wandb.init()
to start a run before logging data with wandb.log()
. When you’re done logging data, call wandb.finish()
to end the run. If you don’t call wandb.finish()
, the run will end when your script exits.
For more on using wandb.init()
, including detailed examples, check out our guide and FAQs.
Examples:
Explicitly set the entity and project and choose a name for the run:
```python
import wandb

run = wandb.init(
    entity="geoff",
    project="capsules",
    name="experiment-2021-10-31",
)

# ... your training code here ...

run.finish()
```
Add metadata about the run using the `config` argument:
```python
import wandb

config = {"lr": 0.01, "batch_size": 32}

with wandb.init(config=config) as run:
    run.config.update({"architecture": "resnet", "depth": 34})

    # ... your training code here ...
```
Note that you can use `wandb.init()` as a context manager to automatically call `wandb.finish()` at the end of the block.
**Args:**
- `entity`: The username or team name under which the runs will be logged. The entity must already exist, so ensure you've created your account or team in the UI before starting to log runs. If not specified, the run will default to your default entity. To change the default entity, go to [your settings](https://wandb.ai/settings) and update the "Default location to create new projects" under "Default team".
- `project`: The name of the project under which this run will be logged. If not specified, we use a heuristic to infer the project name based on the system, such as checking the git root or the current program file. If we can't infer the project name, the project will default to `"uncategorized"`.
- `dir`: The absolute path to the directory where experiment logs and metadata files are stored. If not specified, this defaults to the `./wandb` directory. Note that this does not affect the location where artifacts are stored when calling `download()`.
- `id`: A unique identifier for this run, used for resuming. It must be unique within the project and cannot be reused once a run is deleted. The identifier must not contain any of the following special characters: `/ \ # ? % :`. For a short descriptive name, use the `name` field, or for saving hyperparameters to compare across runs, use `config`.
- `name`: A short display name for this run, which appears in the UI to help you identify it. By default, we generate a random two-word name that lets you easily cross-reference runs from the table to charts. Keeping these run names brief enhances readability in chart legends and tables. For saving hyperparameters, we recommend using the `config` field.
- `notes`: A detailed description of the run, similar to a commit message in Git. Use this argument to capture any context or details that may help you recall the purpose or setup of this run in the future.
- `tags`: A list of tags to label this run in the UI. Tags are helpful for organizing runs or adding temporary identifiers like "baseline" or "production." You can easily add, remove tags, or filter by tags in the UI. If resuming a run, the tags provided here will replace any existing tags. To add tags to a resumed run without overwriting the current tags, use `run.tags += ["new_tag"]` after calling `run = wandb.init()`.
- `config`: Sets `wandb.config`, a dictionary-like object for storing input parameters to your run, such as model hyperparameters or data preprocessing settings. The config appears in the UI in an overview page, allowing you to group, filter, and sort runs based on these parameters. Keys should not contain periods (`.`), and values should be smaller than 10 MB. If a dictionary, `argparse.Namespace`, or `absl.flags.FLAGS` is provided, the key-value pairs will be loaded directly into `wandb.config`. If a string is provided, it is interpreted as a path to a YAML file, from which configuration values will be loaded into `wandb.config`.
- `config_exclude_keys`: A list of specific keys to exclude from `wandb.config`.
- `config_include_keys`: A list of specific keys to include in `wandb.config`.
- `allow_val_change`: Controls whether config values can be modified after their initial set. By default, an exception is raised if a config value is overwritten. For tracking variables that change during training, such as a learning rate, consider using `wandb.log()` instead. By default, this is `False` in scripts and `True` in Notebook environments.
- `group`: Specify a group name to organize individual runs as part of a larger experiment. This is useful for cases like cross-validation or running multiple jobs that train and evaluate a model on different test sets. Grouping allows you to manage related runs collectively in the UI, making it easy to toggle and review results as a unified experiment. For more information, refer to our [guide to grouping runs](https://docs.wandb.com/guides/runs/grouping).
- `job_type`: Specify the type of run, especially helpful when organizing runs within a group as part of a larger experiment. For example, in a group, you might label runs with job types such as "train" and "eval". Defining job types enables you to easily filter and group similar runs in the UI, facilitating direct comparisons.
- `mode`: Specifies how run data is managed, with the following options:
- `"online"` (default): Enables live syncing with W&B when a network connection is available, with real-time updates to visualizations.
- `"offline"`: Suitable for air-gapped or offline environments; data is saved locally and can be synced later. Ensure the run folder is preserved to enable future syncing.
- `"disabled"`: Disables all W&B functionality, making the run’s methods no-ops. Typically used in testing to bypass W&B operations.
- `force`: Determines if a W&B login is required to run the script. If `True`, the user must be logged in to W&B; otherwise, the script will not proceed. If `False` (default), the script can proceed without a login, switching to offline mode if the user is not logged in.
- `anonymous`: Specifies the level of control over anonymous data logging. Available options are:
- `"never"` (default): Requires you to link your W&B account before tracking the run. This prevents unintentional creation of anonymous runs by ensuring each run is associated with an account.
- `"allow"`: Enables a logged-in user to track runs with their account, but also allows someone running the script without a W&B account to view the charts and data in the UI.
- `"must"`: Forces the run to be logged to an anonymous account, even if the user is logged in.
- `reinit`: Determines if multiple `wandb.init()` calls can start new runs within the same process. By default (`False`), if an active run exists, calling `wandb.init()` returns the existing run instead of creating a new one. When `reinit=True`, the active run is finished before a new run is initialized. In notebook environments, runs are reinitialized by default unless `reinit` is explicitly set to `False`.
- `resume`: Controls the behavior when resuming a run with the specified `id`. Available options are:
- `"allow"`: If a run with the specified `id` exists, it will resume from the last step; otherwise, a new run will be created.
- `"never"`: If a run with the specified `id` exists, an error will be raised. If no such run is found, a new run will be created.
- `"must"`: If a run with the specified `id` exists, it will resume from the last step. If no run is found, an error will be raised.
- `"auto"`: Automatically resumes the previous run if it crashed on this machine; otherwise, starts a new run.
- `True`: Deprecated. Use `"auto"` instead.
- `False`: Deprecated. Use the default behavior (leaving `resume` unset) to always start a new run.
- `Note`: If `resume` is set, `fork_from` and `resume_from` cannot be used. When `resume` is unset, the system will always start a new run. For more details, see our [guide to resuming runs](https://docs.wandb.com/guides/runs/resuming).
- `resume_from`: Specifies a moment in a previous run to resume a run from, using the format `{run_id}?_step={step}`. This allows users to truncate the history logged to a run at an intermediate step and resume logging from that step. The target run must be in the same project. If an `id` argument is also provided, the `resume_from` argument will take precedence. `resume`, `resume_from` and `fork_from` cannot be used together, only one of them can be used at a time.
- `Note`: This feature is in beta and may change in the future.
- `fork_from`: Specifies a point in a previous run from which to fork a new run, using the format `{id}?_step={step}`. This creates a new run that resumes logging from the specified step in the target run’s history. The target run must be part of the current project. If an `id` argument is also provided, it must be different from the `fork_from` argument, an error will be raised if they are the same. `resume`, `resume_from` and `fork_from` cannot be used together, only one of them can be used at a time.
- `Note`: This feature is in beta and may change in the future.
- `save_code`: Enables saving the main script or notebook to W&B, aiding in experiment reproducibility and allowing code comparisons across runs in the UI. By default, this is disabled, but you can change the default to enable on your [settings page](https://wandb.ai/settings).
- `tensorboard`: Deprecated. Use `sync_tensorboard` instead.
- `sync_tensorboard`: Enables automatic syncing of W&B logs from TensorBoard or TensorBoardX, saving relevant event files for viewing in the W&B UI. (Default: `False`)
- `monitor_gym`: Enables automatic logging of videos of the environment when using OpenAI Gym. For additional details, see our [guide for gym integration](https://docs.wandb.com/guides/integrations/openai-gym).
- `settings`: Specifies a dictionary or `wandb.Settings` object with advanced settings for the run.
**Returns:**
A `Run` object, which is a handle to the current run. Use this object to perform operations like logging data, saving files, and finishing the run. See the [Run API](https://docs.wandb.ai/ref/python/run) for more details.
**Raises:**
- `Error`: If some unknown or internal error happened during the run initialization.
- `AuthenticationError`: If the user failed to provide valid credentials.
- `CommError`: If there was a problem communicating with the W&B server.
- `UsageError`: If the user provided invalid arguments to the function.
- `KeyboardInterrupt`: If the user interrupts the run initialization process.
9 - link_model
function wandb.link_model
wandb.link_model(
path: 'StrPath',
registered_model_name: 'str',
name: 'str | None' = None,
aliases: 'list[str] | None' = None
) → None
Log a model artifact version and link it to a registered model in the model registry.
The linked model version will be visible in the UI for the specified registered model.
Steps:
- Check if 'name' model artifact has been logged. If so, use the artifact version that matches the files located at 'path' or log a new version. Otherwise log files under 'path' as a new model artifact, 'name' of type 'model'.
- Check if registered model with name 'registered_model_name' exists in the 'model-registry' project. If not, create a new registered model with name 'registered_model_name'.
- Link version of model artifact 'name' to registered model, 'registered_model_name'.
- Attach aliases from 'aliases' list to the newly linked model artifact version.
Args:
- `path`: (str) A path to the contents of this model; can be in the following forms: `/local/directory`, `/local/directory/file.txt`, `s3://bucket/path`.
- `registered_model_name`: (str) The name of the registered model that the model is to be linked to. A registered model is a collection of model versions linked to the model registry, typically representing a team's specific ML Task. The entity that this registered model belongs to will be derived from the run.
- `name`: (str, optional) The name of the model artifact that files in 'path' will be logged to. This will default to the basename of the path prepended with the current run id if not specified.
- `aliases`: (List[str], optional) Alias(es) that will only be applied on this linked artifact inside the registered model. The alias "latest" will always be applied to the latest version of an artifact that is linked.
Examples:
```python
run.link_model(
    path="/local/directory",
    registered_model_name="my_reg_model",
    name="my_model_artifact",
    aliases=["production"],
)
```
Invalid usage:
```python
run.link_model(
    path="/local/directory",
    registered_model_name="my_entity/my_project/my_reg_model",
    name="my_model_artifact",
    aliases=["production"],
)

run.link_model(
    path="/local/directory",
    registered_model_name="my_reg_model",
    name="my_entity/my_project/my_model_artifact",
    aliases=["production"],
)
```
**Raises:**
- `AssertionError`: if registered_model_name is a path or if model artifact 'name' is of a type that does not contain the substring 'model'
- `ValueError`: if name has invalid special characters
**Returns:**
None
10 - log
function wandb.log
wandb.log(
data: 'dict[str, Any]',
step: 'int | None' = None,
commit: 'bool | None' = None,
sync: 'bool | None' = None
) → None
Upload run data.
Use log
to log data from runs, such as scalars, images, video, histograms, plots, and tables.
See our guides to logging for live examples, code snippets, best practices, and more.
The most basic usage is run.log({"train-loss": 0.5, "accuracy": 0.9})
. This will save the loss and accuracy to the run’s history and update the summary values for these metrics.
Visualize logged data in the workspace at wandb.ai, or locally on a self-hosted instance of the W&B app, or export data to visualize and explore locally, e.g. in Jupyter notebooks, with our API.
Logged values don’t have to be scalars. Logging any wandb object is supported. For example run.log({"example": wandb.Image("myimage.jpg")})
will log an example image which will be displayed nicely in the W&B UI. See the reference documentation for all of the different supported types or check out our guides to logging for examples, from 3D molecular structures and segmentation masks to PR curves and histograms. You can use wandb.Table
to log structured data. See our guide to logging tables for details.
The W&B UI organizes metrics with a forward slash (/
) in their name into sections named using the text before the final slash. For example, the following results in two sections named “train” and “validate”:
run.log(
{
"train/accuracy": 0.9,
"train/loss": 30,
"validate/accuracy": 0.8,
"validate/loss": 20,
}
)
Only one level of nesting is supported; run.log({"a/b/c": 1})
produces a section named “a/b”.
run.log
is not intended to be called more than a few times per second. For optimal performance, limit your logging to once every N iterations, or collect data over multiple iterations and log it in a single step.
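As a minimal sketch of this pattern (the loss here is a random placeholder for a real training loss), one way to log an averaged value once every N iterations rather than on every iteration:
```python
import random

import wandb

run = wandb.init()

log_every_n = 50
running_loss = 0.0

for step in range(1000):
    loss = random.random()  # placeholder for the loss from a real training step
    running_loss += loss

    # Accumulate over N iterations and log the aggregate in a single step.
    if (step + 1) % log_every_n == 0:
        run.log({"train-loss": running_loss / log_every_n})
        running_loss = 0.0

run.finish()
```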
The W&B step
With basic usage, each call to log
creates a new “step”. The step must always increase, and it is not possible to log to a previous step.
Note that you can use any metric as the X axis in charts. In many cases, it is better to treat the W&B step like you’d treat a timestamp rather than a training step.
# Example: log an "epoch" metric for use as an X axis.
run.log({"epoch": 40, "train-loss": 0.5})
See also define_metric.
It is possible to use multiple log
invocations to log to the same step with the step
and commit
parameters. The following are all equivalent:
# Normal usage:
run.log({"train-loss": 0.5, "accuracy": 0.8})
run.log({"train-loss": 0.4, "accuracy": 0.9})
# Implicit step without auto-incrementing:
run.log({"train-loss": 0.5}, commit=False)
run.log({"accuracy": 0.8})
run.log({"train-loss": 0.4}, commit=False)
run.log({"accuracy": 0.9})
# Explicit step:
run.log({"train-loss": 0.5}, step=current_step)
run.log({"accuracy": 0.8}, step=current_step)
current_step += 1
run.log({"train-loss": 0.4}, step=current_step)
run.log({"accuracy": 0.9}, step=current_step)
Args:
- `data`: A `dict` with `str` keys and values that are serializable Python objects, including: `int`, `float` and `string`; any of the `wandb.data_types`; lists, tuples and NumPy arrays of serializable Python objects; other `dict`s of this structure.
- `step`: The step number to log. If `None`, then an implicit auto-incrementing step is used. See the notes in the description.
- `commit`: If true, finalize and upload the step. If false, then accumulate data for the step. See the notes in the description. If `step` is `None`, then the default is `commit=True`; otherwise, the default is `commit=False`.
- `sync`: This argument is deprecated and does nothing.
Examples: For more examples, including more detailed ones, see our guides to logging.
Basic usage ```python
import wandb
run = wandb.init()
run.log({"accuracy": 0.9, "epoch": 5})
```
Incremental logging ```python
import wandb
run = wandb.init()
run.log({"loss": 0.2}, commit=False)
# Somewhere else when I'm ready to report this step:
run.log({"accuracy": 0.8})
```
Histogram ```python
import numpy as np
import wandb
# sample gradients at random from normal distribution
gradients = np.random.randn(100, 100)
run = wandb.init()
run.log({"gradients": wandb.Histogram(gradients)})
```
Image from numpy ```python
import numpy as np
import wandb
run = wandb.init()
examples = []
for i in range(3):
    pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
    image = wandb.Image(pixels, caption=f"random field {i}")
    examples.append(image)
run.log({"examples": examples})
```
Image from PIL ```python
import numpy as np
from PIL import Image as PILImage
import wandb
run = wandb.init()
examples = []
for i in range(3):
    pixels = np.random.randint(
        low=0,
        high=256,
        size=(100, 100, 3),
        dtype=np.uint8,
    )
    pil_image = PILImage.fromarray(pixels, mode="RGB")
    image = wandb.Image(pil_image, caption=f"random field {i}")
    examples.append(image)
run.log({"examples": examples})
```
Video from numpy ```python
import numpy as np
import wandb
run = wandb.init()
# axes are (time, channel, height, width)
frames = np.random.randint(
low=0,
high=256,
size=(10, 3, 100, 100),
dtype=np.uint8,
)
run.log({"video": wandb.Video(frames, fps=4)})
```
Matplotlib Plot ```python
from matplotlib import pyplot as plt
import numpy as np
import wandb
run = wandb.init()
fig, ax = plt.subplots()
x = np.linspace(0, 10)
y = x * x
ax.plot(x, y) # plot y = x^2
run.log({"chart": fig})
```
PR Curve ```python
import wandb
run = wandb.init()
run.log({"pr": wandb.plot.pr_curve(y_test, y_probas, labels)})
```
3D Object ```python
import wandb
run = wandb.init()
run.log(
{
"generated_samples": [
wandb.Object3D(open("sample.obj")),
wandb.Object3D(open("sample.gltf")),
wandb.Object3D(open("sample.glb")),
]
}
)
```
Raises:
- `wandb.Error`: if called before `wandb.init`
- `ValueError`: if invalid data is passed
11 - log_artifact
function wandb.log_artifact
wandb.log_artifact(
artifact_or_path: 'Artifact | StrPath',
name: 'str | None' = None,
type: 'str | None' = None,
aliases: 'list[str] | None' = None,
tags: 'list[str] | None' = None
) → Artifact
Declare an artifact as an output of a run.
Args:
- `artifact_or_path`: (str or Artifact) A path to the contents of this artifact. Can be in the following forms:
  - `/local/directory`
  - `/local/directory/file.txt`
  - `s3://bucket/path`
  You can also pass an Artifact object created by calling `wandb.Artifact`.
- `name`: (str, optional) An artifact name. Valid names can be in the following forms: `name:version`, `name:alias`, or `digest`. This will default to the basename of the path prepended with the current run id if not specified.
- `type`: (str) The type of artifact to log. Examples include `dataset` and `model`.
- `aliases`: (list, optional) Aliases to apply to this artifact, defaults to `["latest"]`.
- `tags`: (list, optional) Tags to apply to this artifact, if any.
Returns:
An Artifact
object.
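As an illustration only (the directory path, artifact name, aliases, and tags below are placeholders), logging a local directory as a dataset artifact might look like this:
```python
import wandb

run = wandb.init()

# Log the contents of a local directory as a new dataset artifact of this run.
artifact = run.log_artifact(
    "/local/dataset/directory",  # placeholder path to the artifact contents
    name="my_dataset",           # placeholder artifact name
    type="dataset",
    aliases=["latest", "v2-cleaned"],
    tags=["tabular"],
)
run.finish()
```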
12 - log_model
function wandb.log_model
wandb.log_model(
path: 'StrPath',
name: 'str | None' = None,
aliases: 'list[str] | None' = None
) → None
Logs a model artifact containing the contents inside the ‘path’ to a run and marks it as an output to this run.
Args:
- `path`: (str) A path to the contents of this model. Can be in the following forms:
  - `/local/directory`
  - `/local/directory/file.txt`
  - `s3://bucket/path`
- `name`: (str, optional) A name to assign to the model artifact that the file contents will be added to. The string may only contain alphanumeric characters, dashes, underscores, and dots. This will default to the basename of the path prepended with the current run id if not specified.
- `aliases`: (list, optional) Aliases to apply to the created model artifact, defaults to `["latest"]`.
Examples:
Valid usage ```python
run.log_model(
    path="/local/directory",
    name="my_model_artifact",
    aliases=["production"],
)
```
Invalid usage ```python
run.log_model(
path="/local/directory",
name="my_entity/my_project/my_model_artifact",
aliases=["production"],
)
```
**Raises:**
- `ValueError`: if name has invalid special characters
**Returns:**
None
13 - login
function login
login(
anonymous: Optional[Literal['must', 'allow', 'never']] = None,
key: Optional[str] = None,
relogin: Optional[bool] = None,
host: Optional[str] = None,
force: Optional[bool] = None,
timeout: Optional[int] = None,
verify: bool = False
) → bool
Set up W&B login credentials.
By default, this will only store credentials locally without verifying them with the W&B server. To verify credentials, pass verify=True
.
Args:
anonymous
: (string, optional) Can be “must”, “allow”, or “never”. If set to “must”, always log a user in anonymously. If set to “allow”, only create an anonymous user if the user isn’t already logged in. If set to “never”, never log a user anonymously. Default set to “never”.key
: (string, optional) The API key to use.relogin
: (bool, optional) If true, will re-prompt for API key.host
: (string, optional) The host to connect to.force
: (bool, optional) If true, will force a relogin.timeout
: (int, optional) Number of seconds to wait for user input.verify
: (bool) Verify the credentials with the W&B server.
Returns:
- `bool`: True if an API key is configured.
Raises:
- `AuthenticationError`: if `api_key` fails verification with the server.
- `UsageError`: if `api_key` cannot be configured and no tty is available.
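A brief sketch of typical usage (assuming an API key is available via the `WANDB_API_KEY` environment variable or an interactive prompt; the calls below only use the parameters documented above):
```python
import wandb

# Store credentials locally without contacting the server.
wandb.login()

# Verify the stored credentials with the W&B server.
logged_in = wandb.login(verify=True)
print(logged_in)  # True if an API key is configured
```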
14 - plot_table
function plot_table
plot_table(
vega_spec_name: 'str',
data_table: 'wandb.Table',
fields: 'dict[str, Any]',
string_fields: 'dict[str, Any] | None' = None,
split_table: 'bool' = False
) → CustomChart
Creates a custom chart using a Vega-Lite specification and a `wandb.Table`.
This function creates a custom chart based on a Vega-Lite specification and a data table represented by a wandb.Table
object. The specification needs to be predefined and stored in the W&B backend. The function returns a custom chart object that can be logged to W&B using wandb.log()
.
Args:
- `vega_spec_name` (str): The name or identifier of the Vega-Lite spec that defines the visualization structure.
- `data_table` (wandb.Table): A `wandb.Table` object containing the data to be visualized.
- `fields` (dict[str, Any]): A mapping between the fields in the Vega-Lite spec and the corresponding columns in the data table to be visualized.
- `string_fields` (dict[str, Any] | None): A dictionary for providing values for any string constants required by the custom visualization.
- `split_table` (bool): Whether the table should be split into a separate section in the W&B UI. If `True`, the table will be displayed in a section named "Custom Chart Tables". Default is `False`.
Returns:
- `CustomChart`: A custom chart object that can be logged to W&B. To log the chart, pass it to `wandb.log()`.
Raises:
- `wandb.Error`: If `data_table` is not a `wandb.Table` object.
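As a sketch of the flow described above, assuming the built-in line-chart preset `wandb/line/v0` is available in the W&B backend (the field and column names here are placeholders):
```python
import wandb

run = wandb.init()

# Build a small table of data to visualize.
table = wandb.Table(data=[[i, i**2] for i in range(10)], columns=["step", "loss"])

# Map the spec's fields to the table's columns and create the chart.
chart = wandb.plot_table(
    vega_spec_name="wandb/line/v0",  # assumed built-in line-chart preset
    data_table=table,
    fields={"x": "step", "y": "loss"},
    string_fields={"title": "Loss curve"},
)

# Log the custom chart to the run.
run.log({"my_custom_chart": chart})
run.finish()
```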
15 - save
function wandb.save
wandb.save(
glob_str: 'str | os.PathLike | None' = None,
base_path: 'str | os.PathLike | None' = None,
policy: 'PolicyName' = 'live'
) → bool | list[str]
Sync one or more files to W&B.
Relative paths are relative to the current working directory.
A Unix glob, such as “myfiles/*”, is expanded at the time save
is called regardless of the policy
. In particular, new files are not picked up automatically.
A base_path
may be provided to control the directory structure of uploaded files. It should be a prefix of glob_str
, and the directory structure beneath it is preserved. It’s best understood through
examples:
wandb.save("these/are/myfiles/*")
# => Saves files in a "these/are/myfiles/" folder in the run.
wandb.save("these/are/myfiles/*", base_path="these")
# => Saves files in an "are/myfiles/" folder in the run.
wandb.save("/User/username/Documents/run123/*.txt")
# => Saves files in a "run123/" folder in the run. See note below.
wandb.save("/User/username/Documents/run123/*.txt", base_path="/User")
# => Saves files in a "username/Documents/run123/" folder in the run.
wandb.save("files/*/saveme.txt")
# => Saves each "saveme.txt" file in an appropriate subdirectory
# of "files/".
Note: when given an absolute path or glob and no base_path
, one directory level is preserved as in the example above.
Args:
- `glob_str`: A relative or absolute path or Unix glob.
- `base_path`: A path to use to infer a directory structure; see examples.
- `policy`: One of `live`, `now`, or `end`.
  - `live`: upload the file as it changes, overwriting the previous version.
  - `now`: upload the file once now.
  - `end`: upload the file when the run ends.
Returns: Paths to the symlinks created for the matched files.
For historical reasons, this may return a boolean in legacy code.
16 - setup
function setup
setup(settings: 'Settings | None' = None) → _WandbSetup
Prepares W&B for use in the current process and its children.
You can usually ignore this as it is implicitly called by wandb.init()
.
When using wandb in multiple processes, calling wandb.setup()
in the parent process before starting child processes may improve performance and resource utilization.
Note that wandb.setup()
modifies os.environ
, and it is important that child processes inherit the modified environment variables.
See also wandb.teardown()
.
Args:
settings
: Configuration settings to apply globally. These can be overridden by subsequentwandb.init()
calls.
Example:
```python
import multiprocessing

import wandb


def run_experiment(params):
    with wandb.init(config=params):
        # Run experiment
        pass


if __name__ == "__main__":
    # Start backend and set global config
    wandb.setup(settings={"project": "my_project"})

    # Define experiment parameters
    experiment_params = [
        {"learning_rate": 0.01, "epochs": 10},
        {"learning_rate": 0.001, "epochs": 20},
    ]

    # Start multiple processes, each running a separate experiment
    processes = []
    for params in experiment_params:
        p = multiprocessing.Process(target=run_experiment, args=(params,))
        p.start()
        processes.append(p)

    # Wait for all processes to complete
    for p in processes:
        p.join()

    # Optional: Explicitly shut down the backend
    wandb.teardown()
```
17 - sweep
function sweep
sweep(
sweep: Union[dict, Callable],
entity: Optional[str] = None,
project: Optional[str] = None,
prior_runs: Optional[List[str]] = None
) → str
Initialize a hyperparameter sweep.
Search for hyperparameters that optimize a cost function of a machine learning model by testing various combinations.
Make note of the unique identifier, `sweep_id`, that is returned. At a later step, provide the `sweep_id` to a sweep agent.
Args:
- `sweep`: The configuration of a hyperparameter search (or configuration generator). See [Sweep configuration structure](https://docs.wandb.ai/guides/sweeps/define-sweep-configuration) for information on how to define your sweep. If you provide a callable, ensure that the callable does not take arguments and that it returns a dictionary that conforms to the W&B sweep config spec.
- `entity`: The username or team name where you want to send W&B runs created by the sweep to. Ensure that the entity you specify already exists. If you don't specify an entity, the run will be sent to your default entity, which is usually your username.
- `project`: The name of the project where W&B runs created from the sweep are sent to. If the project is not specified, the run is sent to a project labeled "Uncategorized".
- `prior_runs`: The run IDs of existing runs to add to this sweep.
Returns:
sweep_id
: str. A unique identifier for the sweep.
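For example, a minimal sketch that defines a sweep configuration, creates the sweep, and starts an agent (the metric, parameter names, project name, and the placeholder objective are illustrative only):
```python
import wandb

# A random-search sweep over two hyperparameters.
sweep_configuration = {
    "method": "random",
    "metric": {"name": "accuracy", "goal": "maximize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
        "batch_size": {"values": [16, 32, 64]},
    },
}


def train():
    with wandb.init() as run:
        # Placeholder objective; replace with real training and evaluation.
        accuracy = run.config.learning_rate * run.config.batch_size
        run.log({"accuracy": accuracy})


sweep_id = wandb.sweep(sweep_configuration, project="my_project")
wandb.agent(sweep_id, function=train, count=5)
```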
18 - teardown
function teardown
teardown(exit_code: 'int | None' = None) → None
Waits for wandb to finish and frees resources.
Completes any runs that were not explicitly finished using run.finish()
and waits for all data to be uploaded.
It is recommended to call this at the end of a session that used wandb.setup()
. It is invoked automatically in an atexit
hook, but this is not reliable in certain setups such as when using Python’s multiprocessing
module.
19 - termerror
function termerror
termerror(
string: 'str',
newline: 'bool' = True,
repeat: 'bool' = True,
prefix: 'bool' = True
) → None
Log an error to stderr.
The arguments are the same as for termlog()
.
20 - termlog
function termlog
termlog(
string: 'str' = '',
newline: 'bool' = True,
repeat: 'bool' = True,
prefix: 'bool' = True
) → None
Log an informational message to stderr.
The message may contain ANSI color sequences and the \n character. Colors are stripped if stderr is not a TTY.
Args:
- `string`: The message to display.
- `newline`: Whether to add a newline to the end of the string.
- `repeat`: If false, then the string is not printed if an exact match has already been printed through any of the other logging functions in this file.
- `prefix`: Whether to include the "wandb:" prefix.
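For illustration, a short sketch of how this and the related terminal helpers documented in this reference might be called (the messages are placeholders):
```python
import wandb

wandb.termlog("starting preprocessing")  # informational message with the "wandb:" prefix
wandb.termwarn("dataset is large; this may be slow")
wandb.termerror("failed to open the cache file", prefix=False)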
21 - termsetup
function termsetup
termsetup(
settings: 'wandb.Settings',
logger: 'SupportsLeveledLogging | None'
) → None
Configure the global logging functions.
Args:
- `settings`: The settings object passed to wandb.setup() or wandb.init().
- `logger`: A fallback logger to use for "silent" mode. In this mode, the logger is used instead of printing to stderr.
22 - termwarn
function termwarn
termwarn(
string: 'str',
newline: 'bool' = True,
repeat: 'bool' = True,
prefix: 'bool' = True
) → None
Log a warning to stderr.
The arguments are the same as for termlog()
.
23 - unwatch
function wandb.unwatch
wandb.unwatch(
models: 'torch.nn.Module | Sequence[torch.nn.Module] | None' = None
) → None
Remove pytorch model topology, gradient and parameter hooks.
Args:
- `models` (torch.nn.Module | Sequence[torch.nn.Module]): Optional list of pytorch models that have had `watch` called on them.
24 - use_artifact
function wandb.use_artifact
wandb.use_artifact(
artifact_or_name: 'str | Artifact',
type: 'str | None' = None,
aliases: 'list[str] | None' = None,
use_as: 'str | None' = None
) → Artifact
Declare an artifact as an input to a run.
Call download
or file
on the returned object to get the contents locally.
Args:
- `artifact_or_name`: (str or Artifact) An artifact name. May be prefixed with project/ or entity/project/. If no entity is specified in the name, the Run or API setting's entity is used. Valid names can be in the following forms: `name:version` or `name:alias`. You can also pass an Artifact object created by calling `wandb.Artifact`.
- `type`: (str, optional) The type of artifact to use.
- `aliases`: (list, optional) Aliases to apply to this artifact.
- `use_as`: (string, optional) Optional string indicating what purpose the artifact was used with. Will be shown in the UI.
Returns:
An Artifact
object.
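As a sketch (the artifact name and type are placeholders), declaring a dataset artifact as an input to a run and downloading its contents might look like this:
```python
import wandb

run = wandb.init()

# Declare the artifact as an input to this run and fetch it locally.
artifact = run.use_artifact("my_dataset:latest", type="dataset")
local_dir = artifact.download()
print(local_dir)  # path to the downloaded artifact contents

run.finish()
```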
25 - use_model
function wandb.use_model
wandb.use_model(name: 'str') → FilePathStr
Download the files logged in a model artifact ’name'.
Args:
name
: (str) A model artifact name. ’name’ must match the name of an existing logged model artifact. May be prefixed with entity/project/. Valid names can be in the following forms: - model_artifact_name:version - model_artifact_name:alias
Examples:
Valid usage ```python
run.use_model(
    name="my_model_artifact:latest",
)
run.use_model(
    name="my_project/my_model_artifact:v0",
)
run.use_model(
    name="my_entity/my_project/my_model_artifact:<digest>",
)
```
Invalid usage ```python
run.use_model(
name="my_entity/my_project/my_model_artifact",
)
```
**Raises:**
- `AssertionError`: if model artifact 'name' is of a type that does not contain the substring 'model'.
**Returns:**
- `path`: (str) path to downloaded model artifact file(s).
26 - watch
function wandb.watch
wandb.watch(
models: 'torch.nn.Module | Sequence[torch.nn.Module]',
criterion: 'torch.F | None' = None,
log: "Literal['gradients', 'parameters', 'all'] | None" = 'gradients',
log_freq: 'int' = 1000,
idx: 'int | None' = None,
log_graph: 'bool' = False
) → None
Hooks into the given PyTorch model(s) to monitor gradients and the model’s computational graph.
This function can track parameters, gradients, or both during training. It should be extended to support arbitrary machine learning models in the future.
Args:
- `models` (Union[torch.nn.Module, Sequence[torch.nn.Module]]): A single model or a sequence of models to be monitored.
- `criterion` (Optional[torch.F]): The loss function being optimized (optional).
- `log` (Optional[Literal["gradients", "parameters", "all"]]): Specifies whether to log "gradients", "parameters", or "all". Set to None to disable logging. (default="gradients")
- `log_freq` (int): Frequency (in batches) to log gradients and parameters. (default=1000)
- `idx` (Optional[int]): Index used when tracking multiple models with `wandb.watch`. (default=None)
- `log_graph` (bool): Whether to log the model's computational graph. (default=False)
Raises:
- `ValueError`: If `wandb.init` has not been called or if any of the models are not instances of `torch.nn.Module`.
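As a brief sketch (the model, data, and training loop below are trivial placeholders, not a recommended recipe), hooking a PyTorch model so that gradients and parameters are logged every 100 batches might look like this:
```python
import torch
import torch.nn as nn
import wandb

run = wandb.init()

# A trivial placeholder model and loss.
model = nn.Linear(10, 1)
criterion = nn.MSELoss()

# Log gradients and parameters every 100 batches, and log the computational graph.
wandb.watch(model, criterion=criterion, log="all", log_freq=100, log_graph=True)

for _ in range(5):
    inputs = torch.randn(8, 10)
    targets = torch.randn(8, 1)
    loss = criterion(model(inputs), targets)
    loss.backward()  # backward pass triggers the logging hooks
    run.log({"loss": loss.item()})

run.finish()
```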