
Commit 3fc3df1

tests: replace legacy e2e with flash-based mock-worker provisioning (#479)
* chore: scaffold e2e test directory and fixture project
* feat: add QB fixture handlers for e2e tests
* feat: add LB fixture handler for e2e tests
* chore: register e2e pytest markers (qb, lb, cold_start)
* feat: add e2e conftest with flash_server lifecycle fixture
* feat: add e2e tests for sync and async QB handlers
* feat: add e2e tests for stateful worker persistence
* feat: add e2e tests for SDK Endpoint client round-trip
* feat: add e2e tests for async SDK Endpoint client
* feat: add e2e cold start benchmark test
* feat: add e2e tests for LB remote dispatch
* feat: replace CI-e2e.yml with flash-based QB e2e tests
* feat: add nightly CI workflow for full e2e suite including LB
* fix: correct e2e test request format, error handling, and CI config

  Smoke testing revealed several issues:
  - flash run QB routes dispatch remotely via the @endpoint decorator, requiring RUNPOD_API_KEY for all tests (not just LB)
  - Request body format must match handler param names (input_data, not prompt)
  - Health check must catch ConnectTimeout in addition to ConnectError
  - lb_endpoint.py startScript had the wrong handler path (/src/ vs /app/)
  - asyncio_runner.Job requires (endpoint_id, job_id, session, headers); replaced with asyncio_runner.Endpoint
  - Cold start test uses a dedicated port (8199) to avoid conflicts
  - CI-e2e.yml now requires the RUNPOD_API_KEY secret and has a 15min timeout
  - HTTP client timeout increased to 120s for remote dispatch latency

* refactor: address code quality review findings

  - Remove unused import runpod from test_lb_dispatch.py
  - Narrow bare Exception catch to (TypeError, ValueError, RuntimeError)
  - Extract _wait_for_ready to conftest as wait_for_ready with a poll_interval param
  - Replace assert with pytest.fail in the verify_local_runpod fixture
  - Move _patch_runpod_base_url to conftest as an autouse fixture (DRY)
  - Add named constants for ports and timeouts
  - Add status assertions on set calls in test_state_independent_keys

* fix(ci): install editable runpod with deps before flash

  The previous install order (flash first, then editable with --no-deps) left aiohttp and other transitive deps missing because the editable build produced a different version identifier. Fix: install editable with full deps first, then flash, then re-overlay the editable.

* fix(e2e): initialize runpod.api_key from env var for SDK client tests

  The SDK's RunPodClient and AsyncEndpoint constructors check runpod.api_key at init time. The conftest patched endpoint_url_base but never set api_key, causing RuntimeError for all SDK client tests. Also add the response body to state test assertions for debugging the 500 errors from stateful_handler remote dispatch.

* fix(ci): exclude tests/e2e from default pytest collection

  The CI unit test workflow runs `uv run pytest` without path restrictions, which collects e2e tests that require the flash CLI. Add tests/e2e to norecursedirs so only CI-e2e.yml runs these tests (with explicit markers).

* fix(e2e): warm up QB endpoints before running tests

  Flash's @remote dispatch provisions serverless endpoints on first request (~60s cold start). Without warmup, early tests fail with 500 because endpoints aren't ready. Run concurrent warmup requests in the flash_server fixture to provision all 3 QB endpoints before tests. Also add the response body to assertion messages for better debugging.
* fix(e2e): remove incompatible tests and reduce per-test timeout

  - Remove test_async_run: flash dev server's /run endpoint doesn't return the Runpod API format ({"id": "..."}) needed for async job polling
  - Remove test_run_async_poll: same /run incompatibility
  - Redesign state tests: remote dispatch means in-memory state can't persist across calls, so test individual set/get operations instead
  - Add explicit timeout=120 to SDK run_sync() calls to prevent 600s hangs
  - Reduce per-test timeout from 600s to 180s so hanging tests don't block the entire suite
  - Increase job timeout from 15 to 20 min to accommodate endpoint warmup

* fix(e2e): increase http client timeout and fix error assertion

  - Increase HTTP_CLIENT_TIMEOUT from 120 to 180s to match the per-test timeout, preventing httpx.ReadTimeout for slow remote dispatch
  - Add AttributeError to the expected exceptions in test_run_sync_error (the SDK raises AttributeError when run_sync receives None input)

* fix(ci): update unit test matrix to Python 3.10-3.12

  Drop 3.8 and 3.9 support, add 3.12. Flash requires 3.10+ and the SDK should target the same range.

* fix(e2e): remove stateful handler tests incompatible with remote dispatch

  The stateful_handler uses multi-param kwargs (action, key, value), which flash's remote dispatch returns 500 for. The other handlers use a single dict param and work correctly. Remove the stateful handler fixture and tests; the remaining 7 tests provide solid coverage of handler execution, SDK client integration, cold start, and error propagation.
* fix(tests): fix mock targets and cold start threshold in unit tests

  - Patch requests.Session.request instead of .get/.post in the 401 tests (RunPodClient._request uses session.request, not get/post directly)
  - Fix test_missing_api_key to test Endpoint creation with a None key (it was calling run() on an already-created endpoint with a valid key)
  - Increase the cold start benchmark threshold from 1000ms to 2000ms (CI runners with shared CPUs consistently exceed 1000ms)

* fix(ci): add pytest-rerunfailures for flaky remote dispatch timeouts

  Remote dispatch via the flash dev server occasionally hangs after the first successful request. Adding --reruns 1 --reruns-delay 5 to both e2e workflows as a mitigation for transient timeout failures.

* fix(e2e): remove flaky raw httpx handler tests

  The test_sync_handler and test_async_handler tests hit the flash dev server directly with httpx, which consistently times out due to remote dispatch hanging after warmup. These handlers are already validated by the SDK-level tests (test_endpoint_client::test_run_sync and test_async_endpoint::test_async_run_sync), which pass reliably.

* fix(e2e): consolidate SDK tests to a single handler to reduce flakiness

  Flash remote dispatch intermittently hangs on sequential requests to different handlers. Consolidated to use async_handler for the happy-path SDK test and removed the redundant test_async_endpoint.py. Only one handler gets warmed up now, reducing provisioning time and eliminating the cross-handler dispatch stall pattern.

* fix(e2e): remove autouse from patch_runpod_globals to prevent cold endpoint

  autouse=True forced flash_server startup before all tests, including test_cold_start, which takes ~60s on its own server. By the time test_run_sync ran, the provisioned endpoint had gone cold, causing 120s timeout failures in CI.

  - Remove autouse=True, rename to patch_runpod_globals
  - Add patch_runpod_globals to test_endpoint_client usefixtures
  - Increase SDK timeout 120s → 180s to match the pytest per-test timeout

* fix(ci): surface flash provisioning logs in e2e test output

  Add --log-cli-level=INFO to the pytest command so flash's existing log.info() calls for endpoint provisioning, job creation, and status polling are visible in CI logs.

* fix(e2e): surface flash server stderr to CI output

  Stop piping flash subprocess stderr so provisioning logs (endpoint IDs, GraphQL mutations, job status) flow directly to CI output.

* feat(e2e): inject PR branch runpod-python into provisioned endpoints

  Provisioned serverless endpoints were running the published PyPI runpod-python, not the PR branch. Use PodTemplate.startScript to pip install the PR's git ref before the original start command.

  - Add e2e_template.py: reads RUNPOD_SDK_GIT_REF, builds a PodTemplate with a startScript that installs the PR branch then runs the handler
  - Update fixture handlers to pass template=get_e2e_template()
  - Set RUNPOD_SDK_GIT_REF in both CI workflows
  - Align the nightly workflow env var name, add --log-cli-level=INFO

* refactor(e2e): redesign e2e tests to provision mock-worker endpoints

  Replace flash-run-based e2e tests with direct endpoint provisioning using Flash's Endpoint(image=...) mode. Tests now provision real Runpod serverless endpoints running the mock-worker image, inject the PR's runpod-python via PodTemplate(dockerArgs=...), and validate SDK behavior against live endpoints.

  - Add tests.json test case definitions (basic, delay, generator, async_generator)
  - Add e2e_provisioner.py: reads tests.json, groups by hardwareConfig, provisions one endpoint per unique config
  - Add test_mock_worker.py: parametrized tests driven by tests.json
  - Rewrite conftest.py: remove flash-run fixtures, add provisioning fixtures
  - Make test_cold_start.py self-contained with its own fixture directory
  - Simplify CI workflows: remove flash run/undeploy steps
  - Set FLASH_IS_LIVE_PROVISIONING=false to use ServerlessEndpoint (LiveServerless overwrites imageName with the Flash base image)
  - Delete old flash-run fixtures and test files

* fix(e2e): add structured logging to provisioner and test execution

  Log endpoint provisioning details (name, image, dockerArgs, gpus), job submission/completion (job_id, output, error), and the SDK version so CI output shows what is happening during e2e runs.

* feat(e2e): add endpoint cleanup after test session

  Call resource_config.undeploy() for each provisioned endpoint in the session teardown to avoid accumulating orphaned endpoints and templates on the Runpod account across CI runs.
* chore(ci): remove nightly e2e workflow

* fix(e2e): address PR review feedback

  - Redirect subprocess stdout/stderr to DEVNULL to prevent pipe deadlock
  - Guard send_signal with a returncode check to avoid ProcessLookupError
  - Add an explanatory comment to the empty except clause (code scanning finding)
  - Remove a machine-local absolute path from CLAUDE.md
  - Update spec status to reflect that the implementation is complete

* perf(e2e): run jobs concurrently and consolidate endpoints

  - Submit all test jobs via asyncio.gather instead of serial parametrize
  - Exclude the endpoint name from hardware_config_key so basic+delay share one endpoint (4 endpoints down to 3)
  - Reduce CI timeout from 20m to 15m

* fix(e2e): use the flash undeploy CLI for reliable endpoint cleanup

  The previous teardown called resource_config.undeploy() via asyncio.get_event_loop().run_until_complete(), which could fail silently due to event loop state during pytest session teardown. Switch to subprocess flash undeploy --all --force, which runs in a clean process, reads .runpod/resources.pkl directly, and handles the full undeploy lifecycle reliably.

* fix(tests): replace empty except pass with continue in cold start poll

  Resolves a code scanning alert: the except clause now uses continue with an explanatory comment instead of bare pass.
* fix(tests): address review feedback from runpod-Henrik

  - verify_local_runpod: check that runpod.__file__ resolves under the repo root instead of a fragile "runpod-python" string match (item 4)
  - hardware_config_key: document which fields are included and why; note what to update for future test additions (item 6)
  - e2e_provisioner: clarify that module-level env mutation must precede the Flash import (item 8)
  - test_cold_start: capture subprocess output to a tempfile, include the tail in the assertion message on failure (item 7)
  - test_performance/test_cold_start: document measured p99 variance justifying the 2000ms threshold (item 3)
  - setup.py: bump python_requires to >=3.10 to match the CI matrix (item 2)
  - CI-e2e.yml: use --quiet instead of 2>/dev/null (nit)

* fix: update requires-python to >=3.10 in pyproject.toml

  setup.py was updated but pyproject.toml still had >=3.8, which caused uv sync to fail in CI with a resolver conflict. Also updated classifiers to drop 3.8/3.9 and add 3.12.

* revert: restore Python >=3.8 support and original CI matrix

  Dropping 3.8/3.9 is a breaking change for downstream users. Revert pyproject.toml, setup.py, and the CI matrix to their original values. Python version narrowing should be a separate, intentional PR.

* ci: add manual workflow to clean up stale serverless endpoints

  Adds a workflow_dispatch action with two inputs:
  - dry_run (default true): list endpoints without deleting
  - name_filter: only target endpoints whose name contains the string

  Uses raw GraphQL (no SDK dependency) to list and delete endpoints via the same RUNPOD_API_KEY used by CI e2e tests. Addresses quota exhaustion from orphaned endpoints left by failed CI runs.
* fix(e2e): address copilot review feedback

  - Remove the unused api_key param from _run_single_case and the test fixture
  - Add a unique run ID suffix to endpoint names to prevent collisions across parallel CI runs sharing the same API key
  - Bump astral-sh/setup-uv from v3 to v6 to match CI-pytests.yml

* fix(e2e): scope undeploy to provisioned endpoints only

  Replace flash undeploy --all --force with a per-name undeploy for each endpoint provisioned by this test run. Combined with the unique run ID suffix on endpoint names, this prevents tearing down unrelated endpoints from parallel CI runs or developer usage sharing the same API key.
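The "exclude the endpoint name from hardware_config_key" change can be sketched as follows. This is a hedged illustration only: the function name comes from the commit message, but its real signature, the tests.json field names, and the sample cases here are assumptions.

```python
import json

def hardware_config_key(case: dict) -> str:
    """Stable grouping key for a test case's hardwareConfig.

    The endpoint name is deliberately excluded, so cases that differ
    only by name (e.g. basic and delay) share one provisioned endpoint.
    """
    config = {k: v for k, v in case.get("hardwareConfig", {}).items() if k != "name"}
    return json.dumps(config, sort_keys=True)

# Hypothetical test cases mirroring the tests.json ids from the commit message.
cases = [
    {"id": "basic", "hardwareConfig": {"name": "e2e-basic", "gpus": ["any"]}},
    {"id": "delay", "hardwareConfig": {"name": "e2e-delay", "gpus": ["any"]}},
    {"id": "generator", "hardwareConfig": {"name": "e2e-gen", "gpus": ["A40"]}},
]

groups: dict = {}
for case in cases:
    groups.setdefault(hardware_config_key(case), []).append(case["id"])

print(groups)  # basic and delay collapse into one group; generator stays separate
```

With three cases but only two distinct hardware configs, the provisioner would create two endpoints instead of three.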
1 parent 7938936 commit 3fc3df1

File tree

13 files changed: +607 −94 lines


.github/workflows/CI-e2e.yml

Lines changed: 23 additions & 78 deletions
@@ -1,93 +1,38 @@
-# Performs a full test of the package within production environment.
-
-name: CI | End-to-End Runpod Python Tests
-
+name: CI-e2e
 on:
   push:
-    branches:
-      - main
+    branches: [main]
   pull_request:
-    branches:
-      - main
-
+    branches: [main]
   workflow_dispatch:
 
 jobs:
-  e2e-build:
-    name: Build and push mock-worker Docker image
+  e2e:
     if: github.repository == 'runpod/runpod-python'
     runs-on: ubuntu-latest
-    outputs:
-      docker_tag: ${{ steps.output_docker_tag.outputs.docker_tag }}
-
+    timeout-minutes: 15
     steps:
-      - name: Checkout Repo
-        uses: actions/checkout@v4
-        with:
-          fetch-depth: 2
-
-      - name: Clone and patch mock-worker
-        run: |
-          git clone https://github.com/runpod-workers/mock-worker
-          GIT_SHA=${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
-          echo "git+https://github.com/runpod/runpod-python.git@$GIT_SHA" > mock-worker/builder/requirements.txt
-
-      - name: Set up QEMU
-        uses: docker/setup-qemu-action@v3
+      - uses: actions/checkout@v4
 
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
+      - uses: astral-sh/setup-uv@v6
 
-      - name: Login to Docker Hub
-        uses: docker/login-action@v3
+      - uses: actions/setup-python@v5
         with:
-          username: ${{ secrets.DOCKERHUB_USERNAME }}
-          password: ${{ secrets.DOCKERHUB_TOKEN }}
+          python-version: "3.12"
 
-      - name: Define Docker Tag
-        id: docker_tag
+      - name: Install dependencies
         run: |
-          DOCKER_TAG=${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
-          echo "DOCKER_TAG=$(echo $DOCKER_TAG | cut -c 1-7)" >> $GITHUB_ENV
-
-      - name: Set Docker Tag as Output
-        id: output_docker_tag
-        run: echo "docker_tag=${{ env.DOCKER_TAG }}" >> $GITHUB_OUTPUT
-
-      - name: Build and push Docker image
-        uses: docker/build-push-action@v6
-        with:
-          context: ./mock-worker
-          file: ./mock-worker/Dockerfile
-          push: true
-          tags: ${{ vars.DOCKERHUB_REPO }}/${{ vars.DOCKERHUB_IMG }}:${{ env.DOCKER_TAG }}
-          cache-from: type=gha
-          cache-to: type=gha,mode=max
-
-  test:
-    name: Run End-to-End Tests
-    runs-on: ubuntu-latest
-    needs: [e2e-build]
-
-    steps:
-      - uses: actions/checkout@v4
-
-      - name: Run Tests
-        id: run-tests
-        uses: runpod/runpod-test-runner@v2.1.0
-        with:
-          image-tag: ${{ vars.DOCKERHUB_REPO }}/${{ vars.DOCKERHUB_IMG }}:${{ needs.e2e-build.outputs.docker_tag }}
-          runpod-api-key: ${{ secrets.RUNPOD_API_KEY }}
-          request-timeout: 1200
-
-      - name: Verify Tests
-        env:
-          TOTAL_TESTS: ${{ steps.run-tests.outputs.total-tests }}
-          SUCCESSFUL_TESTS: ${{ steps.run-tests.outputs.succeeded }}
+          uv venv
+          source .venv/bin/activate
+          uv pip install -e ".[test]" --quiet || uv pip install -e .
+          uv pip install runpod-flash pytest pytest-asyncio pytest-timeout pytest-rerunfailures httpx
+          uv pip install -e . --reinstall --no-deps
+          python -c "import runpod; print(f'runpod: {runpod.__version__} from {runpod.__file__}')"
+
+      - name: Run e2e tests
         run: |
-          echo "Total tests: $TOTAL_TESTS"
-          echo "Successful tests: $SUCCESSFUL_TESTS"
-          if [ "$TOTAL_TESTS" != "$SUCCESSFUL_TESTS" ]; then
-            exit 1
-          fi
+          source .venv/bin/activate
+          pytest tests/e2e/ -v -p no:xdist --timeout=600 --reruns 1 --reruns-delay 5 --log-cli-level=INFO -o "addopts="
+        env:
+          RUNPOD_API_KEY: ${{ secrets.RUNPOD_API_KEY }}
+          RUNPOD_SDK_GIT_REF: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
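The workflow uses the GitHub Actions expression `a && b || c` (which behaves like a ternary when `b` is truthy) twice to pick the PR head SHA on pull_request events and the push SHA otherwise. The same selection logic, sketched in plain shell with hypothetical placeholder SHAs:

```shell
# On a pull_request event, use the PR head SHA; otherwise the push SHA.
EVENT_NAME="pull_request"   # stand-in for github.event_name
PR_HEAD_SHA="abc1234"       # stand-in for github.event.pull_request.head.sha
PUSH_SHA="def5678"          # stand-in for github.sha

if [ "$EVENT_NAME" = "pull_request" ]; then
  GIT_REF="$PR_HEAD_SHA"
else
  GIT_REF="$PUSH_SHA"
fi
echo "$GIT_REF"  # prints abc1234 for a pull_request event
```

This matters for e2e: on PRs, `github.sha` points at a synthetic merge commit that does not exist on the fork, so installing `runpod-python` by git ref requires the head SHA.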
Lines changed: 110 additions & 0 deletions
@@ -0,0 +1,110 @@
name: Cleanup stale endpoints
on:
  workflow_dispatch:
    inputs:
      dry_run:
        description: "List endpoints without deleting (true/false)"
        required: true
        default: "true"
        type: choice
        options:
          - "true"
          - "false"
      name_filter:
        description: "Only delete endpoints whose name contains this string (empty = all)"
        required: false
        default: ""

jobs:
  cleanup:
    if: github.repository == 'runpod/runpod-python'
    runs-on: ubuntu-latest
    timeout-minutes: 5
    steps:
      - name: Cleanup endpoints
        env:
          RUNPOD_API_KEY: ${{ secrets.RUNPOD_API_KEY }}
          DRY_RUN: ${{ inputs.dry_run }}
          NAME_FILTER: ${{ inputs.name_filter }}
        run: |
          python3 - <<'SCRIPT'
          import json
          import os
          import urllib.request

          API_URL = "https://api.runpod.io/graphql"
          API_KEY = os.environ["RUNPOD_API_KEY"]
          DRY_RUN = os.environ.get("DRY_RUN", "true") == "true"
          NAME_FILTER = os.environ.get("NAME_FILTER", "").strip()

          def graphql(query, variables=None):
              payload = json.dumps({"query": query, "variables": variables or {}}).encode()
              req = urllib.request.Request(
                  f"{API_URL}?api_key={API_KEY}",
                  data=payload,
                  headers={"Content-Type": "application/json"},
              )
              with urllib.request.urlopen(req) as resp:
                  return json.loads(resp.read())

          # List all endpoints
          result = graphql("""
          query {
            myself {
              endpoints {
                id
                name
                workersMin
                workersMax
                createdAt
              }
            }
          }
          """)

          endpoints = result.get("data", {}).get("myself", {}).get("endpoints", [])
          if not endpoints:
              print("No endpoints found.")
              raise SystemExit(0)

          # Filter if requested
          if NAME_FILTER:
              targets = [ep for ep in endpoints if NAME_FILTER in ep.get("name", "")]
              print(f"Filter '{NAME_FILTER}' matched {len(targets)}/{len(endpoints)} endpoints")
          else:
              targets = endpoints
              print(f"Found {len(targets)} total endpoints (no filter applied)")

          print(f"\n{'DRY RUN — ' if DRY_RUN else ''}{'Listing' if DRY_RUN else 'Deleting'} {len(targets)} endpoint(s):\n")
          for ep in sorted(targets, key=lambda e: e.get("createdAt", "")):
              print(f"  {ep['id']}  {ep.get('name', '(unnamed)'):<40} "
                    f"workers={ep.get('workersMin', '?')}-{ep.get('workersMax', '?')} "
                    f"created={ep.get('createdAt', 'unknown')}")

          if DRY_RUN:
              print("\nDry run complete. Re-run with dry_run=false to delete.")
              raise SystemExit(0)

          # Delete each endpoint
          deleted = 0
          failed = 0
          for ep in targets:
              ep_id = ep["id"]
              ep_name = ep.get("name", "(unnamed)")
              try:
                  resp = graphql(
                      "mutation deleteEndpoint($id: String!) { deleteEndpoint(id: $id) }",
                      {"id": ep_id},
                  )
                  if "errors" in resp:
                      print(f"  FAILED  {ep_id} {ep_name}: {resp['errors']}")
                      failed += 1
                  else:
                      print(f"  DELETED {ep_id} {ep_name}")
                      deleted += 1
              except Exception as exc:
                  print(f"  ERROR   {ep_id} {ep_name}: {exc}")
                  failed += 1

          print(f"\nDone: {deleted} deleted, {failed} failed, {len(endpoints) - len(targets)} skipped (filtered)")
          SCRIPT
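The name_filter input pairs naturally with the unique run ID suffix the e2e provisioner adds to endpoint names. A hedged sketch of that naming scheme (the helper name, base name, and 8-character length are assumptions, not the commit's actual implementation):

```python
import uuid

# One ID per test session; every endpoint provisioned by this run carries it.
RUN_ID = uuid.uuid4().hex[:8]

def unique_endpoint_name(base: str) -> str:
    """Suffix the endpoint name so parallel CI runs sharing one API key
    never collide, and teardown (or the cleanup workflow's name_filter)
    can target exactly the endpoints this run created."""
    return f"{base}-{RUN_ID}"

name = unique_endpoint_name("e2e-mock-worker")
print(name)  # e.g. e2e-mock-worker-3fa1b2c4
```

A cleanup invocation could then pass name_filter=e2e-mock-worker to scope deletion to CI-created endpoints while leaving developer endpoints untouched.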

pytest.ini

Lines changed: 5 additions & 1 deletion
@@ -1,5 +1,9 @@
 [pytest]
 addopts = --durations=10 --cov-config=.coveragerc --timeout=120 --timeout_method=thread --cov=runpod --cov-report=xml --cov-report=term-missing --cov-fail-under=90 -W error -p no:cacheprovider -p no:unraisableexception
 python_files = tests.py test_*.py *_test.py
-norecursedirs = venv *.egg-info .git build
+norecursedirs = venv *.egg-info .git build tests/e2e
 asyncio_mode = auto
+markers =
+    qb: Queue-based tests (local execution, fast)
+    lb: Load-balanced tests (remote provisioning, slow)
+    cold_start: Cold start benchmark (starts own server)

tests/e2e/__init__.py

Whitespace-only changes.

tests/e2e/conftest.py

Lines changed: 78 additions & 0 deletions
@@ -0,0 +1,78 @@
"""E2E test fixtures: provision real endpoints, configure SDK, clean up."""

import logging
import os
import subprocess
from pathlib import Path

import pytest
import runpod

from tests.e2e.e2e_provisioner import load_test_cases, provision_endpoints

log = logging.getLogger(__name__)
REQUEST_TIMEOUT = 300  # seconds per job request

# Repo root: tests/e2e/conftest.py -> ../../
_REPO_ROOT = Path(__file__).resolve().parents[2]


@pytest.fixture(scope="session", autouse=True)
def verify_local_runpod():
    """Fail fast if the local runpod-python is not installed."""
    log.info("runpod version=%s path=%s", runpod.__version__, runpod.__file__)
    runpod_path = Path(runpod.__file__).resolve()
    if not runpod_path.is_relative_to(_REPO_ROOT):
        pytest.fail(
            f"Expected runpod installed from {_REPO_ROOT} but got {runpod_path}. "
            "Run: pip install -e . --force-reinstall --no-deps"
        )


@pytest.fixture(scope="session")
def require_api_key():
    """Skip entire session if RUNPOD_API_KEY is not set."""
    key = os.environ.get("RUNPOD_API_KEY")
    if not key:
        pytest.skip("RUNPOD_API_KEY not set")
    log.info("RUNPOD_API_KEY is set (length=%d)", len(key))


@pytest.fixture(scope="session")
def test_cases():
    """Load test cases from tests.json."""
    cases = load_test_cases()
    log.info("Loaded %d test cases: %s", len(cases), [c.get("id") for c in cases])
    return cases


@pytest.fixture(scope="session")
def endpoints(require_api_key, test_cases):
    """Provision one endpoint per unique hardwareConfig.

    Endpoints deploy lazily on first .run()/.runsync() call.
    """
    eps = provision_endpoints(test_cases)
    for key, ep in eps.items():
        log.info(
            "Endpoint ready: name=%s image=%s template.dockerArgs=%s",
            ep.name, ep.image, ep.template.dockerArgs if ep.template else "N/A",
        )
    yield eps

    # Undeploy only the endpoints provisioned by this test run.
    # Uses by-name undeploy to avoid tearing down unrelated endpoints
    # sharing the same API key (parallel CI runs, developer endpoints).
    endpoint_names = [ep.name for ep in eps.values()]
    log.info("Cleaning up %d provisioned endpoints: %s", len(endpoint_names), endpoint_names)
    for name in endpoint_names:
        try:
            result = subprocess.run(
                ["flash", "undeploy", name, "--force"],
                capture_output=True,
                text=True,
                timeout=60,
            )
            if result.returncode == 0:
                log.info("Undeployed %s", name)
            else:
                log.warning("flash undeploy %s failed (rc=%d): %s", name, result.returncode, result.stderr)
        except Exception:
            log.exception("Failed to undeploy %s", name)
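The conftest imports `load_test_cases` from e2e_provisioner, which is not shown in this diff. A minimal sketch of what such a loader might do, assuming a tests.json shape inferred from the case ids in the commit message; the real field names and validation may differ:

```python
import json
from pathlib import Path

# Hypothetical tests.json content; the actual file defines basic, delay,
# generator, and async_generator cases with their own fields.
SAMPLE = """
[
  {"id": "basic", "input": {"prompt": "hello"}, "hardwareConfig": {"gpus": ["any"]}},
  {"id": "delay", "input": {"mock_delay": 2}, "hardwareConfig": {"gpus": ["any"]}}
]
"""

def load_test_cases(path=None):
    """Read test case definitions and fail loudly on malformed entries."""
    raw = Path(path).read_text() if path else SAMPLE
    cases = json.loads(raw)
    for case in cases:
        if "id" not in case or "input" not in case:
            raise ValueError(f"malformed test case: {case!r}")
    return cases

ids = [c["id"] for c in load_test_cases()]
print(ids)
```

Validating at load time keeps a typo in tests.json from surfacing only after an endpoint has already been provisioned.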
