All notable changes to the Nucleus Python Client will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
0.17.13 - 2026-03-06
- Removed the deprecated `pkg_resources` package and replaced it with `importlib-metadata`
- Resolved ~79 errors/warnings in the Sphinx auto doc build
0.17.12 - 2026-02-23
- `Dataset.deduplicate()` method to deduplicate images using perceptual hashing. Accepts optional `reference_ids` to deduplicate specific items, or deduplicates the entire dataset when only `threshold` is provided. The required `threshold` parameter (0-64) controls similarity matching (lower = stricter, 0 = exact matches only).
- `Dataset.deduplicate_by_ids()` method for deduplication using internal `dataset_item_ids` directly, avoiding the reference ID to item ID mapping for improved efficiency.
- `DeduplicationResult` and `DeduplicationStats` dataclasses for structured deduplication results.
Example usage:
```python
dataset = client.get_dataset("ds_...")

# Deduplicate entire dataset
result = dataset.deduplicate(threshold=10)

# Deduplicate specific items by reference IDs
result = dataset.deduplicate(threshold=10, reference_ids=["ref_1", "ref_2", "ref_3"])

# Deduplicate by internal item IDs (more efficient if you have them)
result = dataset.deduplicate_by_ids(threshold=10, dataset_item_ids=["item_1", "item_2"])

# Access results
print(f"Threshold: {result.stats.threshold}")
print(f"Original: {result.stats.original_count}, Unique: {result.stats.deduplicated_count}")
print(result.unique_reference_ids)
```

0.17.11 - 2025-11-03
- Support passing a limited access key via `NucleusClient(limited_access_key=...)`. When provided, the client sends the `x-limited-access-key` header on all requests (sync and async).
- Allow using the SDK without a standard API key when a `limited_access_key` is supplied. In this mode, Basic Auth is omitted and only the limited access header is used.
Example usage:

```python
client = nucleus.NucleusClient(limited_access_key="<LIMITED_ACCESS_KEY>")
# ...
```

- `Connection` accepts `extra_headers` and only includes Basic Auth when `api_key` is provided. This enables header-only auth with limited access keys.
- Header propagation applies across all request paths, including Validate endpoints and concurrent async helpers.
- Tests updated to be tolerant of limited-access-only runs.
- `NoAPIKey` error messaging updated to account for `limited_access_key` support.
0.17.10 - 2025-03-19
- Adding a page size variable to `items_and_annotation_generator()` to reduce timeout errors for customers with large datasets
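A minimal sketch of using the new page size variable (the `page_size` name is taken from earlier entries in this changelog; the API key and dataset ID are placeholders):

```python
import nucleus

client = nucleus.NucleusClient("YOUR_API_KEY")
dataset = client.get_dataset("ds_...")

# Smaller pages mean more requests but a lower chance of any one timing out
for row in dataset.items_and_annotation_generator(page_size=1000):
    ...  # each row pairs a dataset item with its annotations
```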
0.17.9 - 2025-03-11
- Adding `export_class_labels` methods to datasets and slices to extract the unique class labels of the annotations in the dataset/slice.
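Illustrative sketch (the return value is assumed to be a list of label strings; IDs are placeholders):

```python
import nucleus

client = nucleus.NucleusClient("YOUR_API_KEY")

# Unique class labels across a whole dataset
dataset = client.get_dataset("ds_...")
labels = dataset.export_class_labels()

# Or just for one slice
slc = client.get_slice("slc_...")
slice_labels = slc.export_class_labels()
```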
0.17.8 - 2025-01-02
- Adding `only_most_recent_tasks` parameter for `dataset.scene_and_annotation_generator()` and `dataset.items_and_annotation_generator()` to accommodate multiple sets of ground truth caused by relabeled tasks. Also returns the `task_id` in the annotation results.
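Illustrative sketch (IDs and key are placeholders):

```python
import nucleus

client = nucleus.NucleusClient("YOUR_API_KEY")
dataset = client.get_dataset("ds_...")

# Keep only the ground truth from the most recent labeling task per item,
# ignoring superseded annotations from relabeled tasks
for row in dataset.items_and_annotation_generator(only_most_recent_tasks=True):
    ...  # annotation results include the task_id they came from
```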
0.17.7 - 2024-11-05
- Adding `slice_id` parameter for `dataset.scene_and_annotation_generator()`.
Example usage:
```python
dataset = client.get_dataset("ds_...")
for scene in dataset.scene_and_annotation_generator(slice_id="slc_..."):
    # ...
```

0.17.6 - 2024-07-03
- Method for downloading all annotations grouped by `scene` and `track_reference_id`.
Example usage:
```python
dataset = client.get_dataset("ds_...")
for scene in dataset.scene_and_annotation_generator():
    # ...
```

0.17.5 - 2024-04-15
- Method for uploading lidar semantic segmentation predictions, via `dataset.upload_lidar_semseg_predictions`
Example usage:
```python
dataset = client.get_dataset("ds_...")
model = client.get_model("prj_...")
pointcloud_ref_id = 'pc_ref_1'
predictions_s3 = "s3://temp/predictions.json"
dataset.upload_lidar_semseg_predictions(model, pointcloud_ref_id, predictions_s3)
```

For the expected format of the s3 predictions, refer to the documentation here
0.17.4 - 2024-03-25
- In `Model.run`, added the `model_run_name` parameter. This allows the creation of multiple model runs for datasets.
- Added the environment variable `S3_ENDPOINT` to accommodate nonstandard S3 endpoint URLs when requesting presigned URLs
0.17.2 - 2024-02-28
- In `Dataset.create_slice`, the `reference_ids` parameter is now optional. If left unspecified, it will create an empty slice
0.17.1 - 2024-02-22
- Environment variable `NUCLEUS_SKIP_SSL_VERIFY` to skip SSL verification on requests
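A quick sketch of enabling this, e.g. behind a TLS-intercepting corporate proxy (the entry does not specify a required value, so the value shown is an assumption; any script name is a placeholder):

```shell
# Disable SSL verification for Nucleus client requests in this shell session
export NUCLEUS_SKIP_SSL_VERIFY=1
# then run your script as usual, e.g.: python my_nucleus_script.py
```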
0.17.0 - 2024-02-06
- Added `dataset.add_items_from_dir`
- Added pytest-xdist for test parallelization
- Fix test `test_models.test_remove_invalid_tag_from_model`
0.16.18 - 2024-02-06
- Add the ability to add and remove `trained_slice_id` on a model
0.16.17 - 2024-01-29
- Update documentation
0.16.16 - 2024-01-25
- Minor fixes to docstring
0.16.15 - 2024-01-11
- Fix concurrent lidar pointcloud download to also return intensity when it exists in the response.
0.16.14 - 2024-01-03
- Open up Pydantic version requirements as was fixed in 0.16.11
0.16.13 - 2023-12-13
- Added `trained_slice_id` parameter to `dataset.upload_predictions()` to specify the slice ID used to train the model.
- Fix offset generation for image chips in `dataset.items_and_annotation_chip_generator()`
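A sketch of recording the training slice when uploading predictions (the model/dataset IDs and the `predictions` list are placeholders; `upload_predictions` is called as elsewhere in this changelog):

```python
import nucleus

client = nucleus.NucleusClient("YOUR_API_KEY")
dataset = client.get_dataset("ds_...")
model = client.get_model("prj_...")

predictions = [...]  # e.g. BoxPrediction objects (placeholder)

# Record which slice the model was trained on alongside its predictions
dataset.upload_predictions(model, predictions, trained_slice_id="slc_...")
```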
0.16.12 - 2023-11-29
- Added tag support for slices.
Example:
```python
>>> slc = client.get_slice('slc_id')
>>> tags = slc.tags
>>> slc.add_tags(['new_tag_1', 'new_tag_2'])
```

0.16.11 - 2023-11-22
- Added `num_processes` parameter to `dataset.items_and_annotation_chip_generator()` to specify parallel processing.
- Method to allow for concurrent task fetches for pointcloud data
Example:
```python
>>> task_ids = ['task_1', 'task_2']
>>> resp = client.download_pointcloud_tasks(task_ids=task_ids, frame_num=1)
>>> resp
{
    'task_1': [Point3D(x=5, y=10.7, z=-2.3), ...],
    'task_2': [Point3D(x=1.3, y=11.1, z=1.5), ...],
}
```

- Support environments using pydantic>=2
0.16.10 - 2023-11-22
Allow creating a dataset by crawling all images in a directory, recursively. Also supports privacy mode datasets.
```
~/Documents/
  data/
    2022/
      - img01.png
      - img02.png
    2023/
      - img01.png
      - img02.png
```

```python
data_dir = "~/Documents/data"
client.create_dataset_from_dir(data_dir)
# this will create a dataset named "data" containing 4 images, with the ref IDs:
# ["2022/img01.png", "2022/img02.png", "2023/img01.png", "2023/img02.png"]
```

This requires that a proxy (or file server) is set up and can serve files relative to the `data_dir`.
```python
data_dir = "~/Documents/data"
client.create_dataset_from_dir(
    data_dir,
    dataset_name='my-dataset',
    use_privacy_mode=True,
    privacy_mode_proxy="http://localhost:5000/assets/"
)
```

This would create a dataset `my-dataset`, and when opened in Nucleus, the images would be requested from the path `<privacy_mode_proxy>/<img ref id>`, for example: http://localhost:5000/assets/2022/img01.png
0.16.9 - 2023-11-17
- Minor fixes to video scene upload on privacy mode
0.16.8 - 2023-11-16
- Allow passing width and height to `DatasetItem`. This is required when using privacy mode.
- Added `dataset.items_and_annotation_chip_generator()` functionality to generate chips of images in s3 or locally.
- Added `query` parameter for `dataset.items_and_annotation_generator()` to filter dataset items.
- `upload_to_scale` is no longer a property in `DatasetItem`; users should instead specify `use_privacy_mode` on the dataset during creation
0.16.7 - 2023-11-03
- Allow direct embedding vector upload together with dataset items. `DatasetItem` now has an additional parameter called `embedding_info`, which can be used to directly upload embeddings when a dataset is uploaded.
- Added `dataset.embedding_indexes` property, which exposes information about every embedding index belonging to the dataset.
0.16.6 - 2023-11-01
- Allow datasets to be created in "privacy mode". For example, `client.create_dataset('name', use_privacy_mode=True)`.
- Privacy Mode lets customers use Nucleus without sensitive raw data ever leaving their servers.
- When set to `True`, you can submit URLs to Nucleus that link to raw data assets like images or point clouds, instead of transferring that data to Scale. Access control is then completely in the hands of users: URLs may optionally be protected behind your corporate VPN or an IP whitelist. When you load a Nucleus web page, your browser will directly fetch the raw data from your servers without it ever being accessible to Scale.
0.16.5 - 2023-10-30
- Added a `description` to the slice info.
- Made the `skeleton` key optional on `KeypointsAnnotation`.
0.16.4 - 2023-10-23
- Added a `query_objects` method on the Dataset class. Example:

```python
>>> ds = client.get_dataset('ds_id')
>>> objects = ds.query_objects('annotations.metadata.distance_to_device > 150', ObjectQueryType.GROUND_TRUTH_ONLY)
[CuboidAnnotation(label="", dimensions={}, ...), ...]
```

- Added `EvaluationMatch` class to represent IOU matches, false positives, and false negatives retrieved through the `query_objects` method
0.16.3 - 2023-10-10
- Added a `query_scenes` method on the Dataset class. Example:

```python
>>> ds = client.get_dataset('ds_id')
>>> scenes = ds.query_scenes('scene.metadata.foo = "baz"')
[Scene(reference_id="", metadata={}, ...), ...]
```

0.16.2 - 2023-10-03
- Raise an error on all error states for `AsyncJob.sleep_until_complete()`. Previously it only handled the deprecated "Errored" state
0.16.1 - 2023-09-18
- Added `asynchronous` parameter for `slice.export_embeddings()` and `dataset.export_embeddings()` to allow embeddings to be exported asynchronously.
- Changed `slice.export_embeddings()` and `dataset.export_embeddings()` to be asynchronous by default.
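Illustrative sketch of the new default and the opt-out (the `asynchronous` parameter name comes from this entry; the shape of the returned values is an assumption):

```python
import nucleus

client = nucleus.NucleusClient("YOUR_API_KEY")
dataset = client.get_dataset("ds_...")

# Asynchronous by default: returns a background job rather than blocking
job = dataset.export_embeddings()

# Opt back into the old blocking behavior
embeddings = dataset.export_embeddings(asynchronous=False)
```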
0.16.0 - 2023-09-18
- Removed support for Python 3.6 - it has been end of life for more than a year
- Added a development environment for Python 3.11
0.15.11 - 2023-09-15
- Added `slice.export_raw_json()` functionality to support raw export of object slices (annotations, predictions, item- and scene-level data). Currently does not support image slices.
0.15.10 - 2023-07-20
- Fix `slice.export_predictions(args)` and `slice.export_predictions_generator(args)` methods to return `Predictions` instead of `Annotations`
0.15.9 - 2023-06-26
- Support for Scale Launch client v1.0.0 and higher for the Nucleus + Launch integration
0.15.7 - 2023-06-09
- Allow for downloading pointcloud data for a given task and frame number, example:

```python
import nucleus
import numpy as np

client = nucleus.NucleusClient(API_KEY)
pts = client.download_pointcloud_task(task_id, frame_num=1)
np_pts = np.array([pt.to_list() for pt in pts])
```

0.15.6 - 2023-06-03
- Document new restrictions on slice create/append. The `Dataset.create_slice` and `Slice.append` methods cannot exceed 10,000 items per request.
0.15.5 - 2023-05-08
- Give a default `annotation_id` to `KeypointAnnotations` when not specified
0.15.4 - 2023-03-21
- Added `create_slice_by_ids` to create slices from dataset item, scene, and object IDs
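A hypothetical sketch - the entry only says the method accepts dataset item, scene, and object IDs, so the parameter names below are assumptions, not the documented signature:

```python
import nucleus

client = nucleus.NucleusClient("YOUR_API_KEY")
dataset = client.get_dataset("ds_...")

# Build a slice directly from internal IDs (parameter names are hypothetical)
slc = dataset.create_slice_by_ids(
    name="hard-examples",
    dataset_item_ids=["item_1", "item_2"],
)
```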
0.15.3 - 2023-03-02
- Allow denormalized scores in `EvaluationResults`
0.15.2 - 2023-02-10
- Fix `client.create_launch_model_from_dir(args)` method
0.15.1 - 2023-01-16
- Better filter tuning of the `client.list_jobs(args)` method
- Dataset method to filter jobs, and statistics on running jobs. Example:

```python
>>> client = nucleus.NucleusClient(API_KEY)
>>> ds = client.get_dataset(ds_id)
>>> ds.jobs(show_completed=True, stats_only=True)
{'autotagInference': {'Cancelled': 1, 'Completed': 11},
 'modelRunCommit': {'Completed': 7, 'Errored_Server': 1, 'Running': 1},
 'sliceQuery': {'Completed': 40, 'Running': 2}}
```

Detailed example:

```python
>>> from nucleus.job import CustomerJobTypes
>>> client = nucleus.NucleusClient(API_KEY)
>>> ds = client.get_dataset(ds_id)
>>> from_date = "2022-12-20"; to_date = "2023-01-15"
>>> job_types = [CustomerJobTypes.MODEL_INFERENCE_RUN, CustomerJobTypes.UPLOAD_DATASET_ITEMS]
>>> ds.jobs(
...     from_date=from_date,
...     to_date=to_date,
...     show_completed=True,
...     job_types=job_types,
...     limit=150
... )
# ... returns list of AsyncJob objects
```

0.15.0 - 2022-12-19
- `dataset.slices` now returns a list of `Slice` objects instead of a list of IDs
- Retrieve a slice from a dataset by its name, or all slices of a particular type from a dataset, where type is one of `["dataset_item", "object", "scene"]`: `dataset.get_slices(name, slice_type): List[Slice]`

```python
from nucleus.slice import SliceType

dataset.get_slices(name="My Slice")
dataset.get_slices(slice_type=SliceType.DATASET_ITEM)
```

0.14.30 - 2022-11-29
- Support for uploading track-level metrics to external evaluation functions using track_ref_ids
0.14.29 - 2022-11-22
- Support for `Tracks`, enabling ground truth annotations and model predictions to be grouped across dataset items and scenes
- Helpers to update track metadata, as well as to create and delete tracks at the dataset level
0.14.28 - 2022-11-17
- Support for appending to slice with scene reference IDs
- Better error handling when appending to a slice with non-existent reference IDs
0.14.27 - 2022-11-04
- Support for scene-level external evaluation functions
- Support for uploading custom scene-level metrics
0.14.26 - 2022-11-01
- Support for fetching a scene from a `DatasetItem.reference_id`. Example:

```python
dataset = client.get_dataset("<dataset_id>")
assert dataset.is_scene  # only works on scene datasets
some_item = dataset.iloc(0)
dataset.get_scene_from_item_ref_id(some_item['item'].reference_id)
```

0.14.25 - 2022-10-20
- Items of a slice can be retrieved by the Slice property `.items`
- The type of items returned from `.items` is based on the slice `type`:
  - `slice.type == 'dataset_item'` => list of `DatasetItem` objects
  - `slice.type == 'object'` => list of `Annotation`/`Prediction` objects
  - `slice.type == 'scene'` => list of `Scene` objects
0.14.24 - 2022-10-19
- Late imports for seldom-used heavy libraries. Sped up CLI invocation and autocompletion. If you had shell completions installed before, we recommend removing them from your `.(bash|zsh)rc` file and reinstalling with `nu install-completions`
0.14.23 - 2022-10-17
- Support for building slices via Nucleus' Smart Sample
0.14.22 - 2022-10-14
- Trigger for calculating Validate metrics for a model. This enables discovery of underperforming slices and further model analysis
0.14.21 - 2022-09-28
- Support for `context_attachment` metadata values. See upload metadata for more information.
0.14.20 - 2022-09-23
- Local uploads are now correctly batched, preventing flooding the network with requests
0.14.19 - 2022-08-26
- Support for Coordinate metadata values. See upload metadata for more information.
0.14.18 - 2022-08-16
- Metadata and confidence support for scene categories
0.14.17 - 2022-08-15
- Fix `AsyncJob` status payload keys causing test failures
- Fix `AsyncJob` export test
- Fix `page_size` for `{Dataset,Slice}.items_and_annotation_generator()`
- Change to simple dependency install step to fix CircleCI caching failures
0.14.16 - 2022-08-12
- Scene categorization support
0.14.15 - 2022-08-11
- Removed s3fs, fsspec dependencies for simpler installation in various environments
0.14.14 - 2022-08-11
- `client.slices` to list all of a user's slices, independent of dataset
- Added optional parameter `asynchronous: bool` to `Dataset.update_item_metadata` and `Dataset.update_scene_metadata`, allowing the update to run as a background job when set to `True`
- Validate unit test listing and evaluation history listing. Now uses new bulk fetch endpoints for faster listing.
0.14.13 - 2022-08-10
- Fix payload parsing for scene export
0.14.12 - 2022-08-05
- Added auto-paginated `Slice.export_predictions_generator`
- Changed `{Dataset,Slice}.items_and_annotation_generator` to work with improved paginate endpoint
0.14.11 - 2022-07-20
- Various docstring and typing updates
0.14.10 - 2022-07-20
- `Dataset.items_and_annotation_generator()` bug
- `Slice.items_and_annotation_generator()` bug
0.14.9 - 2022-07-14
- NoneType errors in Validate
0.14.8 - 2022-07-14
- Segmentation metrics filtering. Prior version artificially boosted performance when filtering was applied.
0.14.7 - 2022-07-07
- Support running structured queries and retrieving item results via API
0.14.6 - 2022-07-07
- `Dataset.delete_annotations` now defaults `reference_ids` to an empty list and `keep_history` to true
0.14.5 - 2022-07-05
- Averaging of rich semantic segmentation taxonomies not taking into account missing classes
0.14.4 - 2022-06-21
- Regression that caused Validate filter statements to not work
0.14.3 - 2022-06-21
- CLI installation without GEOS errored out. Now handled by importer.
0.14.2 - 2022-06-21
- Better error reporting when everything is filtered out by a filter statement in a Validate evaluation function
0.14.1 - 2022-06-20
- Adapt Segmentation metrics to better support instance segmentation
- Change Segmentation/Polygon metrics to use new segmentation metrics
0.14.0 - 2022-06-16
- Allow creation/deletion of model tags on new and existing models, e.g.:

```python
# on model creation
model = client.create_model(name="foo_model", reference_id="foo-model-ref", tags=["some tag"])

# on existing models
existing_model = client.models[0]
existing_model.add_tags(['tag a', 'tag b'])

# remove tag
existing_model.remove_tags(['tag a'])
```

0.13.5 - 2022-06-15
- Guard against invalid skeleton indexes in KeypointsAnnotation
0.13.4 - 2022-06-09
- Guard against extras imports
0.13.3 - 2022-06-09
- Make installation of scale-launch optional (again!).
0.13.2 - 2022-06-08
- Open up requirements for easier installation in more environments. Add more optional installs under `metrics`
0.13.1 - 2022-06-08
- Make installation of scale-launch optional
0.13.0 - 2022-06-08
- Segmentation functions to Validate API
0.12.4 - 2022-06-02
- Poetry dependency list
0.12.3 - 2022-06-02
- New methods to export associated Scale task info at either the item or scene level: `Dataset.export_scale_task_info` and `Slice.export_scale_task_info`
0.12.2 - 2022-06-02
- Allow users to upload external evaluation results calculated on the client side.
0.12.1 - 2022-06-02
- Suppress warning statement when un-implemented standard configs found
0.12.0 - 2022-05-27
- Allow users to create external evaluation functions for Scenario Tests in Validate.
0.11.2 - 2022-05-20
- Restored backward compatibility of the video constructor by adding back the deprecated `attachment_type` argument
0.11.1 - 2022-05-19
- Exporting model predictions from a slice
0.11.0 - 2022-05-13
- Segmentation prediction masks can now be evaluated against polygon annotations with new Validate functions
- New function `SegmentationToPolyIOU`, configurable through `client.validate.eval_functions.segmentation_to_poly_iou`
- New function `SegmentationToPolyRecall`, configurable through `client.validate.eval_functions.segmentation_to_poly_recall`
- New function `SegmentationToPolyPrecision`, configurable through `client.validate.eval_functions.segmentation_to_poly_precision`
- New function `SegmentationToPolyMAP`, configurable through `client.validate.eval_functions.segmentation_to_poly_map`
- New function `SegmentationToPolyAveragePrecision`, configurable through `client.validate.eval_functions.segmentation_to_poly_ap`
0.10.8 - 2022-05-10
- Add checks for duplicate (`reference_id`, `annotation_id`) when uploading Annotations or Predictions
0.10.7 - 2022-05-09
- Add checks for duplicate reference IDs
0.10.6 - 2022-05-06
- Video privacy mode
- Removed `attachment_type` argument in video upload API
0.10.5 - 2022-05-04
- Invalid polygons are dropped from PolygonMetric iou matching
0.10.4 - 2022-05-02
- Additional check added for KeypointsAnnotation names validation
- MP4 video upload
0.10.3 - 2022-04-22
- Polygon and bounding box matching uses Shapely again, providing faster evaluations
- Evaluation function passing fixed for Polygon and BoundingBox configurations
0.10.1 - 2022-04-21
- Added check for payload size
0.10.0 - 2022-04-21
- `KeypointsAnnotation` added
- `KeypointsPrediction` added
0.9.0 - 2022-04-07
- Validate metrics support metadata and field filtering on input annotation and predictions
- 3D/Cuboid metrics: Recall, Precision, 3D IOU and bird's-eye 2D IOU
- Shapely can be used for metric development if the optional `scale-nucleus[shapely]` is installed
- Full support for passing parameters to evaluation configurations
0.8.4 - 2022-04-06
- Changing `camera_params` of dataset items can now be done through the dataset method `update_items_metadata`
0.8.3 - 2022-03-29
- New Validate functionality to initialize scenario tests without a threshold, and to set test thresholds based on a baseline model.
0.8.2 - 2022-03-18
- A fix to the `CameraModels` enumeration to fix export of camera calibrations for 3D scenes
0.8.1 - 2022-03-18
- `slice.items_generator()` and `dataset.items_generator()` to allow for export of dataset items at any scale.
0.8.0 - 2022-03-16
- `mask_url` can now be a local file for segmentation annotations or predictions, meaning local upload is now supported for segmentations
- Camera params for sensor fusion ingest now support additional camera params to accommodate fisheye camera, etc.
- More detailed parameters to control uploads in case of timeouts (see `dataset.upload_predictions` and `dataset.append`)
- No more artificially low concurrency for local uploads (all local uploads should be faster now)
- Client no longer uses the deprecated (and now removed) segmentation-specific server endpoints
- Fixed a bug where retries for local uploads were not working properly: should improve local upload robustness
- Removed `client.predict` and `client.annotate`, which had been marked as deprecated for several months.
0.7.0 - 2022-03-09
- `LineAnnotation` added
- `LinePrediction` added
0.6.7 - 2022-03-08
- `get_autotag_refinement_metrics`
- Get model using `model_run_id`
- Video API change to require `image_location` instead of `video_frame_location` in `DatasetItem`s
0.6.6 - 2022-02-18
- Video upload support
0.6.5 - 2022-02-16
- `Dataset.update_autotag` docstring formatting
- `BoxPrediction` dataclass parameter typing
- `validate.scenario_test_evaluation` typo
0.6.4 - 2022-02-16
- Categorization metrics are patched to run properly on Validate evaluation service
0.6.3 - 2022-02-15
- Add categorization f1 score to metrics
0.6.1 - 2022-02-08
- Adapt scipy and click dependencies to allow Google Colab usage without update
0.6.0 - 2022-02-07
- Nucleus CLI interface `nu`. Installation instructions are in the `README.md`.
0.5.4 - 2022-01-28
- Add `NucleusClient.get_job` to retrieve `AsyncJob`s by job ID
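A sketch of retrieving a previously launched job and waiting on it (`sleep_until_complete` appears elsewhere in this changelog; the job ID is a placeholder):

```python
import nucleus

client = nucleus.NucleusClient("YOUR_API_KEY")

# Re-attach to a background job from an earlier session by its ID
job = client.get_job("job_...")
job.sleep_until_complete()  # raises on error states (see 0.16.2)
```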
0.5.3 - 2022-01-25
- Add average precision to polygon metrics
- Add mean average precision to polygon metrics
0.5.2 - 2022-01-20
- Add `Dataset.delete_scene`
- Removed `Shapely` dependency
0.5.1 - 2022-01-11
- Updated dependencies for full Python 3.6 compatibility
0.5.0 - 2022-01-10
- `nucleus.metrics` module for computing metrics between Nucleus `Annotation` and `Prediction` objects.
0.4.5 - 2022-01-07
- `Dataset.scenes` property that fetches the Scale-generated ID, reference ID, type, and metadata of all scenes in the Dataset.
0.4.4 - 2022-01-04
- `Slice.export_raw_items()` method that fetches accessible (signed) URLs for all items in the Slice.
0.4.3 - 2022-01-03
- Improved error messages for categorization
- Category taxonomies are now updatable
0.4.2 - 2021-12-16
- `Slice.name` property that fetches the Slice's user-defined name.
- `Slice.items` property that fetches all items contained in the Slice.
- `Slice.info()` now only retrieves the Slice's `name`, `slice_id`, and `dataset_id`.
  - The Slice's items are no longer fetched unnecessarily; this used to cause considerable latency.
  - This method issues a warning to use `Slice.items` when attempting to access items.
### Deprecated
- `NucleusClient.slice_info(..)` is deprecated in favor of `Slice.info()`.
0.4.1 - 2021-12-13
- Datasets in Nucleus now fall under two categories: scene or item.
- Scene Datasets can only have scenes uploaded to them.
- Item Datasets can only have items uploaded to them.
- `NucleusClient.create_dataset` now requires a boolean parameter `is_scene` to immutably set whether the Dataset is a scene or item Dataset.
0.4.0 - 2021-08-12
- `NucleusClient.modelci` client extension that houses all features related to Model CI, a continuous integration and testing framework for evaluating machine learning models.
- `NucleusClient.modelci.UnitTest` - class to represent a Model CI unit test.
- `NucleusClient.modelci.UnitTestEvaluation` - class to represent an evaluation result of a Model CI unit test.
- `NucleusClient.modelci.UnitTestItemEvaluation` - class to represent an evaluation result of an individual dataset item within a Model CI unit test.
- `NucleusClient.modelci.eval_functions` - collection class housing a library of standard evaluation functions used in computer vision.
0.3.0 - 2021-11-23
- `NucleusClient.datasets` property that lists Datasets in a more human-friendly manner than `NucleusClient.list_datasets()`
- `NucleusClient.models` property, preferred over the deprecated `list_models`
- `NucleusClient.jobs` property. `NucleusClient.list_jobs` is still the preferred method to use if you filter jobs on access.
- Deprecated method access now produces a deprecation warning in the logs.
- Model runs have been deprecated and will be removed in the near future. Use a Model directly instead. The following functions have all been deprecated as a part of that:
  - `NucleusClient.get_model_run(..)`
  - `NucleusClient.delete_model_run(..)`
  - `NucleusClient.create_model_run(..)`
  - `NucleusClient.commit_model_run(..)`
  - `NucleusClient.model_run_info(..)`
  - `NucleusClient.predictions_ref_id(..)`
  - `NucleusClient.predictions_iloc(..)`
  - `NucleusClient.predictions_loc(..)`
  - `Dataset.create_model_run(..)`
  - `Dataset.model_runs(..)`
- `NucleusClient.list_datasets` is deprecated in favor of `NucleusClient.datasets`. The latter allows for direct usage of `Dataset` objects.
- `NucleusClient.list_models` is deprecated in favor of `NucleusClient.models`.
- `NucleusClient.get_dataset_items` is deprecated in favor of `Dataset.items` to make the object model more consistent.
- `NucleusClient.delete_dataset_item` is deprecated in favor of `Dataset.delete_item` to make the object model more consistent.
- `NucleusClient.populate_dataset` is deprecated in favor of `Dataset.append` to make the object model more consistent.
- `NucleusClient.ingest_tasks` is deprecated in favor of `Dataset.ingest_tasks` to make the object model more consistent.
- `NucleusClient.add_model` is deprecated in favor of `NucleusClient.create_model` for consistent terminology.
- `NucleusClient.dataset_info` is deprecated in favor of `Dataset.info` to make the object model more consistent.
- `NucleusClient.delete_annotations` is deprecated in favor of `Dataset.delete_annotations` to make the object model more consistent.
- `NucleusClient.predict` is deprecated in favor of `Dataset.upload_predictions` to make the object model more consistent.
- `NucleusClient.dataitem_ref_id` is deprecated in favor of `Dataset.refloc` to make the object model more consistent.
- `NucleusClient.dataitem_iloc` is deprecated in favor of `Dataset.iloc` to make the object model more consistent.
- `NucleusClient.dataitem_loc` is deprecated in favor of `Dataset.loc` to make the object model more consistent.
- `NucleusClient.create_slice` is deprecated in favor of `Dataset.create_slice` to make the object model more consistent.
- `NucleusClient.create_custom_index` is deprecated in favor of `Dataset.create_custom_index` to make the object model more consistent.
- `NucleusClient.delete_custom_index` is deprecated in favor of `Dataset.delete_custom_index` to make the object model more consistent.
- `NucleusClient.set_continuous_indexing` is deprecated in favor of `Dataset.set_continuous_indexing` to make the object model more consistent.
- `NucleusClient.create_image_index` is deprecated in favor of `Dataset.create_image_index` to make the object model more consistent.
- `NucleusClient.create_object_index` is deprecated in favor of `Dataset.create_object_index` to make the object model more consistent.
- `Dataset.append_scenes` is deprecated in favor of `Dataset.append` for a simpler interface.