diff --git a/.chronus/changes/python-addBackApiViewSphinx-2026-3-9-12-35-1.md b/.chronus/changes/python-addBackApiViewSphinx-2026-3-9-12-35-1.md new file mode 100644 index 00000000000..746bd59aabb --- /dev/null +++ b/.chronus/changes/python-addBackApiViewSphinx-2026-3-9-12-35-1.md @@ -0,0 +1,7 @@ +--- +changeKind: internal +packages: + - "@typespec/http-client-python" +--- + +Add apiview and sphinx to ci \ No newline at end of file diff --git a/.github/skills/emitter-prep-for-pr/SKILL.md b/.github/skills/emitter-prep-for-pr/SKILL.md index 42a5e1d6339..315fcfbd8a1 100644 --- a/.github/skills/emitter-prep-for-pr/SKILL.md +++ b/.github/skills/emitter-prep-for-pr/SKILL.md @@ -12,7 +12,7 @@ description: > # Emitter Prep for PR Prepares language emitter changes for a pull request by running build/format/lint, -creating a changeset with an appropriate message, and pushing to the remote branch. +checking for a changeset, and pushing to the remote branch. **This skill is for language emitter packages only:** @@ -24,116 +24,71 @@ Do NOT use this skill for core TypeSpec packages (compiler, http, openapi3, etc. ## Workflow -### Step 1: Identify changed language emitter packages +### Step 1: Determine the emitter package -Determine which language emitter packages have changes: +Figure out which emitter package the user is working on from context (cwd, recent +changes, or ask). The package will be under `packages/PACKAGE_NAME/`. -```bash -cd ~/Desktop/github/typespec - -# Compare against upstream/main (microsoft/typespec) if available, otherwise main -BASE_BRANCH=$(git rev-parse --verify upstream/main 2> /dev/null && echo "upstream/main" || echo "main") - -# Filter for language emitter packages only -git diff "$BASE_BRANCH" --name-only | grep "^packages/http-client-" | cut -d'/' -f2 | sort -u -``` - -This filters for `http-client-python`, `http-client-csharp`, `http-client-java`, etc.
- -### Step 2: Validate each changed emitter package - -For each changed emitter package (e.g., `http-client-python`, `http-client-csharp`, `http-client-java`): +### Step 2: Build, format, and lint the emitter package ```bash cd ~/Desktop/github/typespec/packages/PACKAGE_NAME # Build npm run build -if [ $? -ne 0 ]; then - echo "Build failed for PACKAGE_NAME" - exit 1 -fi -# Format +# Format (includes both TypeScript and Python formatting) npm run format -if [ $? -ne 0 ]; then - echo "Format failed for PACKAGE_NAME" - exit 1 -fi -# Lint (if available) -npm run lint 2> /dev/null || echo "No lint script for PACKAGE_NAME" +# Lint (emitter-only is fine for quick validation) +npm run lint -- --emitter ``` -If any step fails, report the error and stop. Do not proceed to changeset. - -### Step 3: Run format and spell check at repo root +If any step fails, report the error and stop. Do not proceed. -After validating individual packages, run format and spell check at the repo root: +### Step 3: Run format at repo root ```bash cd ~/Desktop/github/typespec - -# Format all files pnpm format - -# Spell check -pnpm cspell ``` -If spell check fails, either fix the typos or add words to the cspell dictionary. +**Important:** `pnpm format` may touch files outside the emitter package (e.g., +`.devcontainer/`, other packages). When staging changes in Step 6, **only stage +files within the emitter package directory** (`packages/PACKAGE_NAME/`) and +`.chronus/changes/` and `.github/skills/`. Discard any formatting changes to +unrelated files with `git checkout -- FILE`.
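The selective-staging rule above can be sketched as a short shell session. The repo layout, file names, and changeset name here are invented for the demo, not taken from the PR:

```bash
# Illustrative sketch of "stage only the emitter package, discard the rest".
# Builds a throwaway repo so the commands are safe to run anywhere.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "ci@example.com"
git config user.name "ci"

mkdir -p packages/http-client-python .devcontainer .chronus/changes
echo "export {};" > packages/http-client-python/emitter.ts
echo "{}" > .devcontainer/devcontainer.json
git add -A
git commit -qm "init"

# Simulate `pnpm format` touching both the package and an unrelated file
echo "export {}; // formatted" > packages/http-client-python/emitter.ts
echo "{ }" > .devcontainer/devcontainer.json
echo "changeset" > .chronus/changes/my-branch-2026-1-1-0-0-0.md

# Stage only the emitter package and .chronus/changes/ ...
git add packages/http-client-python/ .chronus/changes/
# ...and throw away unrelated formatting churn
git checkout -- .devcontainer/

git diff --cached --name-only
```

After this, only the package file and the changeset are staged; the `.devcontainer` change is gone from the working tree.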
-### Step 4: Analyze changes for changeset message +### Step 4: Check for existing changeset -Examine the changes to determine an appropriate changeset message: +Check if a changeset already exists for the current branch: ```bash cd ~/Desktop/github/typespec - -# Determine base branch -BASE_BRANCH=$(git rev-parse --verify upstream/main 2> /dev/null && echo "upstream/main" || echo "main") - -# Get commit messages on this branch -git log "$BASE_BRANCH"..HEAD --oneline - -# Get changed files -git diff "$BASE_BRANCH" --name-only - -# Get the actual code changes (for understanding intent) -git diff "$BASE_BRANCH" --stat +BRANCH=$(git rev-parse --abbrev-ref HEAD) +ls .chronus/changes/ | grep -i "$BRANCH" || echo "NO_CHANGESET" ``` -### Step 5: Determine changeset parameters - -Based on the changes, determine: - -1. **changeKind** - one of: - - `internal` - Internal changes not user-facing (tests, docs, refactoring) - - `fix` - Bug fixes (patch version bump) - - `feature` - New features (minor version bump) - - `deprecation` - Deprecating existing features (minor version bump) - - `breaking` - Breaking changes (major version bump) - - `dependencies` - Dependency bumps (patch version bump) +- If a changeset **exists**: Skip to Step 6 (no need to create one). +- If **NO_CHANGESET**: Proceed to Step 5 to create one. -2. **packages** - affected packages, e.g.: - - `@typespec/http-client-python` - - `@typespec/http-client-csharp` - - `@typespec/http-client-java` +### Step 5: Create changeset (only if none exists) -3. **message** - concise description of the change +Ask the user what kind of change this is, or infer from context: -### Step 6: Create changeset file +1. **changeKind** - one of: `internal`, `fix`, `feature`, `deprecation`, `breaking`, `dependencies` +2. 
**message** - concise user-focused description -Create a changeset file in `.chronus/changes/`: +Then create the file: ```bash cd ~/Desktop/github/typespec - -# Generate filename with timestamp +BRANCH=$(git rev-parse --abbrev-ref HEAD) TIMESTAMP=$(date +"%Y-%m-%d-%H-%M-%S") -FILENAME=".chronus/changes/BRANCH_NAME-${TIMESTAMP}.md" +FILENAME=".chronus/changes/${BRANCH}-${TIMESTAMP}.md" +``` -cat > "$FILENAME" << 'EOF' +```markdown --- changeKind: packages: @@ -141,159 +96,49 @@ packages: --- -EOF -``` - -### Step 7: Show changes and prompt user - -Display all changes to the user: - -```bash -cd ~/Desktop/github/typespec -git status -git diff --stat ``` -Then use AskUserQuestion to confirm: - -- Show the changeset that will be added -- Show the files that will be committed -- Show which remote will be used: "Will push to `origin`" -- Ask: "Do these changes look good to push to origin?" - -Options: - -- "Yes, push to origin" - proceed with commit and push to origin -- "Push to different remote" - ask which remote to use instead -- "Edit changeset" - let user modify the changeset message/kind -- "Cancel" - abort without pushing - -If user selects "Push to different remote", ask which remote name to use and push to that instead of origin. - -### Step 8: Commit and push (if approved) - -If user approves, commit the changes: +### Step 6: Stage, commit, and push ```bash cd ~/Desktop/github/typespec - -# Stage all changes git add -A - -# Commit with descriptive message -git commit -m "$( - cat << 'EOF' - - -Co-Authored-By: Claude Opus 4.5 -EOF -)" +git status ``` -Then push to the user's fork. **Default to `origin`**, but if the user specified a different remote, use that instead: +Show the user what will be committed and ask for confirmation. 
Then: ```bash -# Get current branch name BRANCH=$(git rev-parse --abbrev-ref HEAD) +git commit -m " -# Push to origin by default (or user-specified remote) -git push -u origin "$BRANCH" -``` - -### Asking about remote - -When prompting the user in Step 7, include the remote that will be used: - -- Show: "Will push to `origin` (your fork)" -- If the user says to use a different remote (e.g., "push to `my_fork`"), use that instead - -**Important:** Never push directly to the `microsoft/typespec` remote (usually named `upstream`). - -## Changeset Message Guidelines - -Write changeset messages that are: +Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>" -1. **User-focused** - Describe the impact on users, not implementation details -2. **Concise** - One sentence, starting with a verb (Add, Fix, Update, Remove) -3. **Specific** - Mention the feature/fix clearly - -### Examples by changeKind: - -**internal:** - -- "Refactor namespace resolution logic for clarity" -- "Add mock API tests for paging scenarios" -- "Update development tooling and skills" - -**fix:** - -- "Fix incorrect deserialization of nullable enum properties" -- "Fix client initialization when using custom endpoints" - -**feature:** +# Always push to origin (user's fork), never upstream +git push origin "$BRANCH" +``` -- "Add support for XML serialization in request bodies" -- "Add `@clientOption` decorator for customizing client behavior" +## Changeset Guidelines -**deprecation:** +### changeKind reference -- "Deprecate `legacyMode` option in favor of `compatibilityMode`" +- **internal**: Tests, CI/CD, refactoring, docs, skills — not user-facing +- **fix**: Bug fixes users would notice +- **feature**: New user-facing capabilities +- **deprecation**: Marking something as deprecated +- **breaking**: Removing or changing behavior incompatibly +- **dependencies**: Dependency version bumps -**breaking:** +### Message examples -- "Remove deprecated `v1` client generation mode" -- "Change 
default serialization format from XML to JSON" +- `internal`: "Improve CI pipeline performance and test infrastructure" +- `fix`: "Fix incorrect deserialization of nullable enum properties" +- `feature`: "Add support for XML serialization in request bodies" -## Language Emitter Package Names +### Package names | Folder | Package Name | | -------------------- | ------------------------------ | | `http-client-python` | `@typespec/http-client-python` | | `http-client-csharp` | `@typespec/http-client-csharp` | | `http-client-java` | `@typespec/http-client-java` | - -## Notes - -### When to use each changeKind - -- **internal**: Tests, documentation, refactoring, CI/CD changes, skill updates -- **fix**: Bug fixes that users would notice -- **feature**: New capabilities users can use -- **deprecation**: Marking something as deprecated (still works, but discouraged) -- **breaking**: Removing or changing behavior in incompatible ways - -### Multiple packages - -If changes affect multiple packages **with the same change kind**, list all of them in a single changeset: - -```yaml -packages: - - "@typespec/http-client-python" - - "@typespec/http-client-csharp" -``` - -**If packages have different change kinds, create separate changeset files for each.** For example, if the PR adds a feature to `@typespec/http-client-python` and fixes a bug in `@typespec/http-client-csharp`, create two files: - -```yaml -# File 1: feature for python -changeKind: feature -packages: - - "@typespec/http-client-python" -``` - -```yaml -# File 2: fix for csharp -changeKind: fix -packages: - - "@typespec/http-client-csharp" -``` - -### Skipping changeset - -Some changes don't need a changeset: - -- Changes only to `.github/skills/` (CI will allow this) -- Changes only to test files (if marked in `changedFiles` config) -- Changes only to markdown files (if marked in `changedFiles` config) - -Check `.chronus/config.yaml` for `changedFiles` patterns that are excluded. 
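Tying the package-name table and the changeKind reference together, a filled-in changeset file looks like this (the changeKind and message are illustrative, reusing the examples above):

```markdown
---
changeKind: fix
packages:
  - "@typespec/http-client-python"
---

Fix incorrect deserialization of nullable enum properties
```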
diff --git a/.gitignore b/.gitignore index bd47e87ea2e..c0558a9a41f 100644 --- a/.gitignore +++ b/.gitignore @@ -233,7 +233,10 @@ packages/http-client-python/tests/**/cadl-ranch-coverage.json !packages/http-client-python/package-lock.json packages/http-client-python/micropip.lock packages/http-client-python/venv_build_wheel/ +packages/http-client-python/tests/**.json # http-server-js emitter packages/http-server-js/test/e2e/generated .pnpm-store/ +packages/http-client-python/tests/.uv-cache/ +packages/http-client-python/tests/.wheels/ diff --git a/packages/http-client-python/emitter/src/emitter.ts b/packages/http-client-python/emitter/src/emitter.ts index fdcfeeef10c..49110135355 100644 --- a/packages/http-client-python/emitter/src/emitter.ts +++ b/packages/http-client-python/emitter/src/emitter.ts @@ -42,21 +42,24 @@ function addDefaultOptions(sdkContext: PythonSdkContext) { const packageName = namespace.replace(/\./g, "-"); options["package-name"] = packageName; } - if ((options as any).flavor !== "azure") { - // if they pass in a flavor other than azure, we want to ignore the value - (options as any).flavor = undefined; - } + // Set flavor based on namespace or passed option if (getRootNamespace(sdkContext).toLowerCase().includes("azure")) { (options as any).flavor = "azure"; + } else if ((options as any).flavor !== "azure") { + // Explicitly set unbranded flavor when not azure + (options as any).flavor = "unbranded"; } if ( options["package-pprint-name"] !== undefined && !options["package-pprint-name"].startsWith('"') ) { - options["package-pprint-name"] = options["use-pyodide"] - ? `${options["package-pprint-name"]}` - : `"${options["package-pprint-name"]}"`; + // Only add quotes for shell compatibility when NOT using emit-yaml-only mode + // (emit-yaml-only passes options via JSON config files, not shell) + const needsShellQuoting = !options["use-pyodide"] && !options["emit-yaml-only"]; + options["package-pprint-name"] = needsShellQuoting + ? 
`"${options["package-pprint-name"]}"` + : `${options["package-pprint-name"]}`; } } @@ -246,6 +249,21 @@ async function onEmitMain(context: EmitContext) { const yamlPath = await saveCodeModelAsYaml("python-yaml-path", parsedYamlMap); if (!program.compilerOptions.noEmit && !program.hasError()) { + // If emit-yaml-only mode, just copy YAML to output dir for batch processing + if (resolvedOptions["emit-yaml-only"]) { + if (!fs.existsSync(outputDir)) { + fs.mkdirSync(outputDir, { recursive: true }); + } + // Copy YAML to output dir with command args embedded + // Use unique filename to avoid conflicts when multiple specs share output dir + const configId = path.basename(yamlPath, ".yaml"); + const batchConfig = { yamlPath, commandArgs, outputDir }; + fs.writeFileSync( + path.join(outputDir, `.tsp-codegen-${configId}.json`), + JSON.stringify(batchConfig, null, 2), + ); + return; + } // if not using pyodide and there's no venv, we try to create venv if (!resolvedOptions["use-pyodide"] && !fs.existsSync(path.join(root, "venv"))) { try { diff --git a/packages/http-client-python/emitter/src/lib.ts b/packages/http-client-python/emitter/src/lib.ts index 2d6010108e0..b267c836bf3 100644 --- a/packages/http-client-python/emitter/src/lib.ts +++ b/packages/http-client-python/emitter/src/lib.ts @@ -25,6 +25,7 @@ export interface PythonEmitterOptions { "use-pyodide"?: boolean; "keep-setup-py"?: boolean; "clear-output-folder"?: boolean; + "emit-yaml-only"?: boolean; } export interface PythonSdkContext extends SdkContext { @@ -110,6 +111,12 @@ export const PythonEmitterOptionsSchema: JSONSchemaType = description: "Whether to clear the output folder before generating the code. Defaults to `false`.", }, + "emit-yaml-only": { + type: "boolean", + nullable: true, + description: + "Emit YAML code model only, without running Python generator. 
For batch processing.", + }, }, required: [], }; diff --git a/packages/http-client-python/eng/scripts/Test-Packages.ps1 b/packages/http-client-python/eng/scripts/Test-Packages.ps1 index cbfd684b154..05ea968b8cb 100644 --- a/packages/http-client-python/eng/scripts/Test-Packages.ps1 +++ b/packages/http-client-python/eng/scripts/Test-Packages.ps1 @@ -74,17 +74,6 @@ try { Invoke-LoggedCommand "npm run ci" Write-Host "All tests passed." -ForegroundColor Green - - # Linux specific: check mypy/lint/pyright on generated code - if ($IsLinux) { - Write-Host "`n=== Running lint on generated code ===" -ForegroundColor Cyan - Invoke-LoggedCommand "npm run lint:generated" - Write-Host "Generated code checks passed." -ForegroundColor Green - - Write-Host "`n=== Running mypy/pyright on generated code ===" -ForegroundColor Cyan - Invoke-LoggedCommand "npm run typecheck:generated" - Write-Host "Generated code mypy/pyright checks passed." -ForegroundColor Green - } } } finally { diff --git a/packages/http-client-python/eng/scripts/ci/config/eslint-ci.config.mjs b/packages/http-client-python/eng/scripts/ci/config/eslint-ci.config.mjs index b8a1418a63e..3b51a3fc0d0 100644 --- a/packages/http-client-python/eng/scripts/ci/config/eslint-ci.config.mjs +++ b/packages/http-client-python/eng/scripts/ci/config/eslint-ci.config.mjs @@ -2,7 +2,11 @@ // Standalone eslint config for http-client-python package // This config is used in CI where monorepo dependencies may not be available import eslint from "@eslint/js"; +import { dirname } from "path"; import tsEslint from "typescript-eslint"; +import { fileURLToPath } from "url"; + +const root = dirname(dirname(dirname(dirname(fileURLToPath(import.meta.url))))); export default [ { @@ -11,6 +15,11 @@ export default [ eslint.configs.recommended, ...tsEslint.configs.recommended, { + languageOptions: { + parserOptions: { + tsconfigRootDir: root, + }, + }, rules: { // TypeScript plugin overrides "@typescript-eslint/no-non-null-assertion": "off", @@ 
-40,6 +49,7 @@ export default [ "no-case-declarations": "off", "no-ex-assign": "off", "no-undef": "off", + "no-useless-assignment": "error", "prefer-const": [ "warn", { diff --git a/packages/http-client-python/eng/scripts/ci/dev_requirements.txt b/packages/http-client-python/eng/scripts/ci/dev_requirements.txt index 2d713225106..5fc4a972cb0 100644 --- a/packages/http-client-python/eng/scripts/ci/dev_requirements.txt +++ b/packages/http-client-python/eng/scripts/ci/dev_requirements.txt @@ -7,6 +7,6 @@ colorama==0.4.6 debugpy==1.8.2 pytest==8.3.2 coverage==7.6.1 -black==24.8.0 +black==26.3.1 ptvsd==4.3.2 types-PyYAML==6.0.12.8 diff --git a/packages/http-client-python/eng/scripts/ci/lint.ts b/packages/http-client-python/eng/scripts/ci/lint.ts index 9d6150b9acc..9dad879a992 100644 --- a/packages/http-client-python/eng/scripts/ci/lint.ts +++ b/packages/http-client-python/eng/scripts/ci/lint.ts @@ -133,14 +133,21 @@ function runCommand( } async function lintEmitter(): Promise { - console.log(`\n${pc.bold("=== Linting TypeScript Emitter ===")}\n`); + console.log(`\n${pc.bold("=== Linting TypeScript ===")}\n`); // Run eslint with local config to avoid dependency on monorepo's eslint.config.js // This ensures the package can be linted in CI without full monorepo dependencies + // Lint both emitter/ and eng/scripts/ directories return runCommand( "eslint", - ["emitter/", "--config", "eng/scripts/ci/config/eslint-ci.config.mjs", "--max-warnings=0"], + [ + "emitter/", + "eng/scripts/", + "--config", + "eng/scripts/ci/config/eslint-ci.config.mjs", + "--max-warnings=0", + ], root, - "eslint emitter/ --max-warnings=0", + "eslint emitter/ eng/scripts/ --max-warnings=0", ); } diff --git a/packages/http-client-python/eng/scripts/ci/regenerate.ts b/packages/http-client-python/eng/scripts/ci/regenerate.ts index d49f5ac4d71..958c871849e 100644 --- a/packages/http-client-python/eng/scripts/ci/regenerate.ts +++ b/packages/http-client-python/eng/scripts/ci/regenerate.ts @@ -7,7 +7,8 @@ */ 
import { compile, NodeHost } from "@typespec/compiler"; -import { rmSync } from "fs"; +import { execSync } from "child_process"; +import { existsSync, readdirSync, rmSync } from "fs"; import { platform } from "os"; import { dirname, join, relative, resolve } from "path"; import pc from "picocolors"; @@ -173,6 +174,9 @@ function buildTaskGroups(specs: string[], flags: RegenerateFlags): TaskGroup[] { // Examples directory options["examples-dir"] = toPosix(join(dirname(spec), "examples")); + // Emit YAML only - Python processing is batched after all specs compile + options["emit-yaml-only"] = true; + tasks.push({ spec, outputDir, options }); } @@ -213,6 +217,25 @@ async function compileSpec(task: CompileTask): Promise<{ success: boolean; error } } +function renderProgressBar( + completed: number, + failed: number, + total: number, + width: number = 40, +): string { + const successCount = completed - failed; + const successWidth = Math.round((successCount / total) * width); + const failWidth = Math.round((failed / total) * width); + const emptyWidth = width - successWidth - failWidth; + + const successBar = pc.bgGreen(" ".repeat(successWidth)); + const failBar = failed > 0 ? 
pc.bgRed(" ".repeat(failWidth)) : ""; + const emptyBar = pc.dim("░".repeat(Math.max(0, emptyWidth))); + + const percent = Math.round((completed / total) * 100); + return `${successBar}${failBar}${emptyBar} ${pc.cyan(`${percent}%`)} (${completed}/${total})`; +} + async function runParallel(groups: TaskGroup[], maxJobs: number): Promise> { const results = new Map(); const executing: Set> = new Set(); @@ -220,6 +243,20 @@ async function runParallel(groups: TaskGroup[], maxJobs: number): Promise sum + g.tasks.length, 0); let completed = 0; + let failed = 0; + const failedSpecs: string[] = []; + + // Check if we're in a TTY for progress bar updates + const isTTY = process.stdout.isTTY; + + const updateProgress = () => { + if (isTTY) { + process.stdout.write(`\r${renderProgressBar(completed, failed, totalTasks)}`); + } + }; + + // Initial progress bar + updateProgress(); for (const group of groups) { // Each group runs as a unit - tasks within a group run sequentially @@ -232,19 +269,17 @@ async function runParallel(groups: TaskGroup[], maxJobs: number): Promise 0) { + console.log(pc.red(`\nFailed specs:`)); + for (const spec of failedSpecs) { + console.log(pc.red(` • ${spec}`)); + } + } + return results; } +function collectConfigFiles(generatedDir: string, flavor: string): string[] { + const flavorDir = join(generatedDir, "..", "tests", "generated", flavor); + if (!existsSync(flavorDir)) return []; + + const configFiles: string[] = []; + for (const pkg of readdirSync(flavorDir, { withFileTypes: true })) { + if (pkg.isDirectory()) { + const pkgDir = join(flavorDir, pkg.name); + // Find all .tsp-codegen-*.json files (supports multiple configs per output dir) + for (const file of readdirSync(pkgDir)) { + if (file.startsWith(".tsp-codegen-") && file.endsWith(".json")) { + configFiles.push(join(pkgDir, file)); + } + } + } + } + return configFiles; +} + +function runBatchPythonProcessing(flavor: string, configCount: number, jobs: number): boolean { + if (configCount === 0) 
return true; + + console.log(pc.cyan(`\nRunning batch Python processing on ${configCount} specs...`)); + + // Find Python venv + let venvPath = join(PLUGIN_DIR, "venv"); + if (existsSync(join(venvPath, "bin"))) { + venvPath = join(venvPath, "bin", "python"); + } else if (existsSync(join(venvPath, "Scripts"))) { + venvPath = join(venvPath, "Scripts", "python.exe"); + } else { + console.error(pc.red("Python venv not found")); + return false; + } + + const batchScript = join(PLUGIN_DIR, "eng", "scripts", "setup", "run_batch.py"); + + try { + // Pass directory and flavor instead of individual config files to avoid command line length limits on Windows + execSync( + `"${venvPath}" "${batchScript}" --generated-dir "${PLUGIN_DIR}" --flavor ${flavor} --jobs ${jobs}`, + { + stdio: "inherit", + cwd: PLUGIN_DIR, + }, + ); + return true; + } catch { + return false; + } +} + async function regenerateFlavor( flavor: string, name: string | undefined, @@ -289,21 +390,46 @@ async function regenerateFlavor( console.log(pc.cyan(`Found ${allSpecs.length} specs (${totalTasks} total tasks) to compile`)); console.log(pc.cyan(`Using ${jobs} parallel jobs\n`)); - // Run compilation + // Run compilation (emits YAML only) const startTime = performance.now(); const results = await runParallel(groups, jobs); - const duration = (performance.now() - startTime) / 1000; + const compileTime = (performance.now() - startTime) / 1000; - // Summary + // Summary for TypeSpec compilation const succeeded = Array.from(results.values()).filter((v) => v).length; - const failed = results.size - succeeded; + const compileFailed = results.size - succeeded; + + console.log( + pc.cyan( + `\nTypeSpec compilation: ${succeeded} succeeded, ${compileFailed} failed (${compileTime.toFixed(1)}s)`, + ), + ); + + if (compileFailed > 0) { + console.log(pc.red(`Skipping Python processing due to compilation failures`)); + return false; + } + + // Batch process all specs with Python + const pyStartTime = performance.now(); + 
const configCount = collectConfigFiles(GENERATED_FOLDER, flavor).length; + // Use fewer Python jobs since Python processing is heavier + const pyJobs = Math.min(4, jobs); + const pySuccess = runBatchPythonProcessing(flavor, configCount, pyJobs); + const pyTime = (performance.now() - pyStartTime) / 1000; + + const totalTime = (performance.now() - startTime) / 1000; console.log(pc.cyan(`\n${"=".repeat(60)}`)); - console.log(pc.cyan(`Results: ${succeeded} succeeded, ${failed} failed`)); - console.log(pc.cyan(`Time: ${duration.toFixed(1)}s`)); + console.log(pc.cyan(`Results: ${succeeded} specs processed`)); + console.log( + pc.cyan( + ` TypeSpec: ${compileTime.toFixed(1)}s | Python: ${pyTime.toFixed(1)}s | Total: ${totalTime.toFixed(1)}s`, + ), + ); console.log(pc.cyan(`${"=".repeat(60)}\n`)); - return failed === 0; + return pySuccess; } async function main() { diff --git a/packages/http-client-python/eng/scripts/ci/run-tests.ts b/packages/http-client-python/eng/scripts/ci/run-tests.ts index c5409113dff..969ff746346 100644 --- a/packages/http-client-python/eng/scripts/ci/run-tests.ts +++ b/packages/http-client-python/eng/scripts/ci/run-tests.ts @@ -35,7 +35,7 @@ Options: -f, --flavor SDK flavor to test (only applies to --generator) If not specified, tests both flavors --env Specific tox environments to run - Available: test, lint, mypy, pyright, docs, ci, unittest + Available: test, lint, mypy, pyright, apiview, sphinx, docs, ci, unittest -j, --jobs Number of parallel jobs (default: CPU cores - 2) -n, --name Filter tests by name pattern -q, --quiet Suppress test output (only show pass/fail summary) @@ -46,8 +46,10 @@ Environments (for --generator): lint Run pylint on generated packages mypy Run mypy type checking on generated packages pyright Run pyright type checking on generated packages - docs Run documentation validation (apiview, sphinx) - ci Run all checks (test + lint + mypy + pyright) + apiview Run apiview validation on generated packages + sphinx Run sphinx
docstring validation on generated packages + docs Run apiview + sphinx (split into parallel envs) + ci Run all checks (test + lint + mypy + pyright + apiview + sphinx) unittest Run unit tests for pygen internals Examples: @@ -56,8 +58,8 @@ Examples: run-tests.ts --generator # Run generator tests for all flavors run-tests.ts --generator --flavor=azure # Run generator tests for azure only run-tests.ts -g -f azure --env=test # Run pytest for azure only - run-tests.ts -g --env=mypy # Run mypy for all flavors - run-tests.ts -g -f unbranded --env=lint # Run pylint for unbranded only + run-tests.ts -g --env=lint # Run pylint for all flavors + run-tests.ts -g -f unbranded --env=mypy # Run mypy for unbranded only `); process.exit(0); } @@ -80,7 +82,13 @@ interface ToxResult { error?: string; } -async function runToxEnv(env: string, pythonPath: string, name?: string): Promise { +interface RunningTask { + env: string; + proc: ChildProcess; + promise: Promise; +} + +function startToxEnv(env: string, pythonPath: string, name?: string): RunningTask { const startTime = Date.now(); const toxIniPath = join(testsDir, "tox.ini"); @@ -92,20 +100,20 @@ async function runToxEnv(env: string, pythonPath: string, name?: string): Promis console.log(`${pc.blue("[START]")} ${env}`); - return new Promise((resolve) => { - const proc: ChildProcess = spawn(pythonPath, args, { - cwd: testsDir, - stdio: !argv.values.quiet ? "inherit" : "pipe", - env: { ...process.env, FOLDER: env.split("-")[1] || "azure" }, - }); + const proc: ChildProcess = spawn(pythonPath, args, { + cwd: testsDir, + stdio: !argv.values.quiet ? 
"inherit" : "pipe", + env: { ...process.env, FOLDER: env.split("-")[1] || "azure" }, + }); - let stderr = ""; - if (argv.values.quiet && proc.stderr) { - proc.stderr.on("data", (data) => { - stderr += data.toString(); - }); - } + let stderr = ""; + if (argv.values.quiet && proc.stderr) { + proc.stderr.on("data", (data) => { + stderr += data.toString(); + }); + } + const promise = new Promise((resolve) => { proc.on("close", (code) => { const duration = (Date.now() - startTime) / 1000; const success = code === 0; @@ -135,6 +143,16 @@ async function runToxEnv(env: string, pythonPath: string, name?: string): Promis }); }); }); + + return { env, proc, promise }; +} + +function killTask(task: RunningTask): void { + try { + task.proc.kill("SIGTERM"); + } catch { + // Process may have already exited + } } async function runParallel( @@ -144,24 +162,51 @@ async function runParallel( name?: string, ): Promise { const results: ToxResult[] = []; - const running: Map> = new Map(); + const running: Map = new Map(); for (const env of envs) { // Wait if we're at max capacity - if (running.size >= maxJobs) { - const completed = await Promise.race(running.values()); + while (running.size >= maxJobs) { + const promises = Array.from(running.values()).map((t) => t.promise); + const completed = await Promise.race(promises); results.push(completed); running.delete(completed.env); + + // Fail-fast: kill all running tasks and exit + if (!completed.success) { + console.log(pc.red(`\n[FAIL-FAST] ${completed.env} failed, killing remaining tasks...`)); + for (const task of running.values()) { + killTask(task); + } + // Wait briefly for processes to terminate + await Promise.all(Array.from(running.values()).map((t) => t.promise)); + return results; + } } // Start new task - const task = runToxEnv(env, pythonPath, name); + const task = startToxEnv(env, pythonPath, name); running.set(env, task); } - // Wait for remaining tasks - const remaining = await Promise.all(running.values()); - 
results.push(...remaining); + // Wait for remaining tasks, checking for failures + while (running.size > 0) { + const promises = Array.from(running.values()).map((t) => t.promise); + const completed = await Promise.race(promises); + results.push(completed); + running.delete(completed.env); + + // Fail-fast: kill all running tasks and exit + if (!completed.success) { + console.log(pc.red(`\n[FAIL-FAST] ${completed.env} failed, killing remaining tasks...`)); + for (const task of running.values()) { + killTask(task); + } + // Wait briefly for processes to terminate + await Promise.all(Array.from(running.values()).map((t) => t.promise)); + return results; + } + } return results; } @@ -316,12 +361,15 @@ async function main(): Promise { baseEnvs = ["test"]; } - // Expand 'ci' into its component environments for parallel execution + // Expand 'ci' and 'docs' into component environments for parallel execution const expandedEnvs: string[] = []; for (const env of baseEnvs) { if (env === "ci") { - // Run test first (sequential), then lint/mypy/pyright/docs in parallel - expandedEnvs.push("test", "lint", "mypy", "pyright", "docs"); + // All envs run in parallel — pre-built wheels make installs cheap + expandedEnvs.push("test", "lint", "mypy", "pyright", "apiview", "sphinx"); + } else if (env === "docs") { + // Split docs into apiview + sphinx for parallelism + expandedEnvs.push("apiview", "sphinx"); } else { expandedEnvs.push(env); } @@ -348,37 +396,48 @@ async function main(): Promise { process.exit(1); } - // Separate test environments from other environments - // Test environments must run sequentially (they share port 3000) - // Other environments (lint, mypy, pyright, docs) can run in parallel - const testEnvs = envs.filter((e) => e.startsWith("test-") || e === "unittest"); - const otherEnvs = envs.filter((e) => !e.startsWith("test-") && e !== "unittest"); - const maxJobs = argv.values.jobs ? 
parseInt(argv.values.jobs, 10) : Math.max(2, cpus().length - 2); console.log(` Flavors: ${flavors.join(", ")}`); console.log(` Environments: ${envs.join(", ")}`); - console.log(` Jobs: ${maxJobs} (test envs run sequentially, others in parallel)`); + console.log(` Jobs: ${maxJobs}`); if (argv.values.name) { console.log(` Filter: ${argv.values.name}`); } console.log(); - // Run test environments first (sequentially) - let results: ToxResult[] = []; - if (testEnvs.length > 0) { - console.log(pc.cyan("Running test environments (sequential)...")); - results = await runParallel(testEnvs, pythonPath, 1, argv.values.name); + // Pre-build wheels for each flavor so tox envs install from pre-built + // wheels instead of rebuilding from source (~2min build once vs ~2min × N envs) + console.log(pc.cyan("Pre-building wheels for all flavors...")); + const installScript = join(testsDir, "install_packages.py"); + for (const flavor of flavors) { + const startTime = Date.now(); + const proc = spawn(pythonPath, [installScript, "build", flavor, testsDir], { + cwd: testsDir, + stdio: "inherit", + }); + await new Promise((resolve) => { + proc.on("close", (code) => { + const duration = ((Date.now() - startTime) / 1000).toFixed(1); + if (code === 0) { + console.log(`${pc.green("[PASS]")} wheel build ${flavor} (${duration}s)`); + } else { + console.log( + `${pc.yellow("[WARN]")} wheel build ${flavor} failed (${duration}s), tox envs will build from source`, + ); + } + resolve(); + }); + }); } + console.log(); - // Run other environments in parallel - if (otherEnvs.length > 0) { - console.log(pc.cyan("\nRunning lint/typecheck environments (parallel)...")); - const otherResults = await runParallel(otherEnvs, pythonPath, maxJobs, argv.values.name); - results = results.concat(otherResults); - } + // Run all environments in parallel + // The mock server serves both azure and unbranded specs, so all tests can run together + console.log(pc.cyan("Running all environments in parallel...")); + const 
results = await runParallel(envs, pythonPath, maxJobs, argv.values.name); allResults.push(...results); } diff --git a/packages/http-client-python/eng/scripts/ci/run_apiview.py b/packages/http-client-python/eng/scripts/ci/run_apiview.py index 5345f694ef0..48d6f890a34 100644 --- a/packages/http-client-python/eng/scripts/ci/run_apiview.py +++ b/packages/http-client-python/eng/scripts/ci/run_apiview.py @@ -10,32 +10,38 @@ import os import sys -from subprocess import check_call, CalledProcessError +from subprocess import run, TimeoutExpired import logging from util import run_check logging.getLogger().setLevel(logging.INFO) +# Timeout for each apiview generation (seconds) +APIVIEW_TIMEOUT = 30 + def _single_dir_apiview(mod): - loop = 0 - while True: + for attempt in range(2): try: - check_call( - [ - "apistubgen", - "--pkg-path", - str(mod.absolute()), - ] + result = run( + ["apistubgen", "--pkg-path", str(mod.absolute())], + capture_output=True, + timeout=APIVIEW_TIMEOUT, ) - except CalledProcessError as e: - if loop >= 2: # retry for maximum 3 times because sometimes the apistubgen has transient failure. 
- logging.error("{} exited with apiview generation error {}".format(mod.stem, e.returncode)) + if result.returncode == 0: + return True + if attempt == 1: + logging.error(f"{mod.stem} failed: {result.stderr.decode()[:200]}") + return False + except TimeoutExpired: + if attempt == 1: + logging.error(f"{mod.stem} timed out after {APIVIEW_TIMEOUT}s") + return False + except Exception as e: + if attempt == 1: + logging.error(f"{mod.stem} error: {e}") return False - else: - loop += 1 - continue - return True + return False if __name__ == "__main__": diff --git a/packages/http-client-python/eng/scripts/ci/run_mypy.py b/packages/http-client-python/eng/scripts/ci/run_mypy.py index 32443974ead..fb6210f72a2 100644 --- a/packages/http-client-python/eng/scripts/ci/run_mypy.py +++ b/packages/http-client-python/eng/scripts/ci/run_mypy.py @@ -12,7 +12,7 @@ import os import logging import sys -from util import run_check +from util import run_check, get_package_namespace_dir logging.getLogger().setLevel(logging.INFO) @@ -27,9 +27,10 @@ def get_config_file_location(): def _single_dir_mypy(mod): - # Exclude "build" directories to avoid mypy "Duplicate module" errors caused by - # stale build/lib/ artifacts from previous setup.py builds. 
- inner_class = next(d for d in mod.iterdir() if d.is_dir() and not str(d).endswith("egg-info") and d.stem != "build") + inner_class = get_package_namespace_dir(mod) + if not inner_class: + logging.info(f"No package directory found in {mod}, skipping") + return True try: check_call( [ diff --git a/packages/http-client-python/eng/scripts/ci/run_pylint.py b/packages/http-client-python/eng/scripts/ci/run_pylint.py index f090b8c913a..5a44df63e9c 100644 --- a/packages/http-client-python/eng/scripts/ci/run_pylint.py +++ b/packages/http-client-python/eng/scripts/ci/run_pylint.py @@ -12,7 +12,7 @@ import os import logging import sys -from util import run_check +from util import run_check, get_package_namespace_dir logging.getLogger().setLevel(logging.INFO) @@ -27,11 +27,10 @@ def get_rfc_file_location(): def _single_dir_pylint(mod): - # Exclude "build" directories created by pip install / setup.py build. - # Without this, "build" may be picked first alphabetically and pylint would - # lint stale build artifacts instead of the actual source, causing false - # positives (e.g. modules named "json", "xml", "datetime" shadow the stdlib). - inner_class = next(d for d in mod.iterdir() if d.is_dir() and not str(d).endswith("egg-info") and d.name != "build") + inner_class = get_package_namespace_dir(mod) + if not inner_class: + logging.info(f"No package directory found in {mod}, skipping") + return True # Only load the Azure pylint guidelines checker plugin for azure packages. # The plugin (azure-pylint-guidelines-checker) is only installed in the # lint-azure tox environment and is not available for unbranded packages. 
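The `get_package_namespace_dir` helper that `run_pylint.py`, `run_mypy.py`, and `run_pyright.py` now share replaces three slightly-different `next(...)` one-liners. A minimal standalone sketch of the lookup (the body mirrors the helper added to `util.py`; the package layout in the example is hypothetical):

```python
import tempfile
from pathlib import Path

# Dirs inside a generated package that are never the SDK namespace
SKIP_PACKAGE_DIRS = {"generated_tests", "generated_samples", "build", "__pycache__", ".pytest_cache"}


def get_package_namespace_dir(mod: Path):
    """Return the first real namespace dir, skipping build artifacts and scaffolding."""
    for d in mod.iterdir():
        if (
            d.is_dir()
            and not d.name.startswith("_")
            and not d.name.endswith("egg-info")
            and d.name not in SKIP_PACKAGE_DIRS
        ):
            return d
    return None


# Hypothetical generated-package layout: build/ and *.egg-info/ must be skipped
with tempfile.TemporaryDirectory() as tmp:
    pkg = Path(tmp)
    for name in ("build", "azure_example.egg-info", "azure"):
        (pkg / name).mkdir()
    found = get_package_namespace_dir(pkg)
    print(found.name)  # "azure"
```

Returning `None` (rather than raising `StopIteration` as the old `next(...)` did) is what lets each check script log and skip a package with no namespace dir instead of crashing.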
diff --git a/packages/http-client-python/eng/scripts/ci/run_pyright.py b/packages/http-client-python/eng/scripts/ci/run_pyright.py index a86ccc2b3cd..b998b515d22 100644 --- a/packages/http-client-python/eng/scripts/ci/run_pyright.py +++ b/packages/http-client-python/eng/scripts/ci/run_pyright.py @@ -13,7 +13,7 @@ import logging import sys import time -from util import run_check +from util import run_check, get_package_namespace_dir logging.getLogger().setLevel(logging.INFO) @@ -28,7 +28,10 @@ def get_pyright_config_file_location(): def _single_dir_pyright(mod): - inner_class = next(d for d in mod.iterdir() if d.is_dir() and not str(d).endswith("egg-info")) + inner_class = get_package_namespace_dir(mod) + if not inner_class: + logging.info(f"No package directory found in {mod}, skipping") + return True retries = 3 while retries: try: diff --git a/packages/http-client-python/eng/scripts/ci/run_sphinx_build.py b/packages/http-client-python/eng/scripts/ci/run_sphinx_build.py index e97476aed9e..0dba25b8f68 100644 --- a/packages/http-client-python/eng/scripts/ci/run_sphinx_build.py +++ b/packages/http-client-python/eng/scripts/ci/run_sphinx_build.py @@ -8,18 +8,21 @@ # This script is used to execute sphinx documentation build within a tox environment. # It uses a central sphinx configuration and validates docstrings by running sphinx-build. 
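Both `run_apiview.py` and `run_sphinx_build.py` move from `check_call` to the same pattern: `subprocess.run` with `capture_output=True` and a timeout, plus a bounded retry for transient failures. A condensed, self-contained sketch of that pattern (the function name and retry count here are illustrative, not the scripts' actual API):

```python
import logging
import sys
from subprocess import TimeoutExpired, run


def run_with_retry(cmd, timeout, attempts=2):
    """Run cmd up to `attempts` times; log a stderr excerpt only on the final failure."""
    for attempt in range(attempts):
        try:
            result = run(cmd, capture_output=True, timeout=timeout)
            if result.returncode == 0:
                return True
            if attempt == attempts - 1:
                logging.error("command failed: %s", result.stderr.decode()[:200])
        except TimeoutExpired:
            if attempt == attempts - 1:
                logging.error("command timed out after %ss", timeout)
    return False


ok = run_with_retry([sys.executable, "-c", "print('ok')"], timeout=30)
bad = run_with_retry([sys.executable, "-c", "raise SystemExit(1)"], timeout=30)
print(ok, bad)  # True False
```

Capturing output keeps interleaved parallel-CI logs quiet while still surfacing the tail of stderr when a package genuinely fails, and the timeout bounds a hung `apistubgen`/`sphinx-build` instead of stalling the whole run.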
-from subprocess import check_call, CalledProcessError +from subprocess import run, TimeoutExpired import os import logging import sys from pathlib import Path -from util import run_check +from util import run_check, SKIP_PACKAGE_DIRS logging.getLogger().setLevel(logging.INFO) # Get the central Sphinx config directory SPHINX_CONF_DIR = os.path.abspath(os.path.dirname(__file__)) +# Timeout for each sphinx build (seconds) +SPHINX_TIMEOUT = 120 + def _create_minimal_index_rst(docs_dir, package_name, module_names): """Create a minimal index.rst file for sphinx to process.""" @@ -50,7 +53,12 @@ def _single_dir_sphinx(mod): # Find the actual Python package directories package_dirs = [ - d for d in mod.iterdir() if d.is_dir() and not d.name.startswith("_") and (d / "__init__.py").exists() + d + for d in mod.iterdir() + if d.is_dir() + and not d.name.startswith("_") + and d.name not in SKIP_PACKAGE_DIRS + and (d / "__init__.py").exists() ] if not package_dirs: @@ -85,7 +93,7 @@ def _single_dir_sphinx(mod): sys.path.insert(0, str(mod.absolute())) try: - result = check_call( + result = run( [ sys.executable, "-m", @@ -100,12 +108,19 @@ def _single_dir_sphinx(mod): "-q", # Quiet mode (only show warnings/errors) str(docs_dir.absolute()), # Source directory str(output_dir.absolute()), # Output directory - ] + ], + capture_output=True, + timeout=SPHINX_TIMEOUT, ) - logging.info(f"Sphinx build completed successfully for {mod.stem}") - return True - except CalledProcessError as e: - logging.error(f"{mod.stem} exited with sphinx build error {e.returncode}") + if result.returncode == 0: + return True + logging.error(f"{mod.stem} sphinx error: {result.stderr.decode()[:500]}") + return False + except TimeoutExpired: + logging.error(f"{mod.stem} timed out after {SPHINX_TIMEOUT}s") + return False + except Exception as e: + logging.error(f"{mod.stem} sphinx error: {e}") return False finally: # Remove from sys.path diff --git a/packages/http-client-python/eng/scripts/ci/util.py 
b/packages/http-client-python/eng/scripts/ci/util.py index 63a51c4d754..07aba0f3f49 100644 --- a/packages/http-client-python/eng/scripts/ci/util.py +++ b/packages/http-client-python/eng/scripts/ci/util.py @@ -8,14 +8,32 @@ import logging from pathlib import Path import argparse -from multiprocessing import Pool +from concurrent.futures import ProcessPoolExecutor, as_completed logging.getLogger().setLevel(logging.INFO) +# Root is the tests directory (4 levels up from this file: ci -> scripts -> eng -> package_root, then into tests) ROOT_FOLDER = os.path.abspath(os.path.join(os.path.abspath(__file__), "..", "..", "..", "..", "tests")) IGNORE_FOLDER = [] +# Directories inside each generated package that should be skipped by all CI checks. +# These are auto-generated test/sample scaffolding, not the actual SDK code. +SKIP_PACKAGE_DIRS = {"generated_tests", "generated_samples", "build", "__pycache__", ".pytest_cache"} + + +def get_package_namespace_dir(mod): + """Find the actual namespace directory inside a generated package, skipping non-SDK dirs.""" + for d in mod.iterdir(): + if ( + d.is_dir() + and not d.name.startswith("_") + and not d.name.endswith("egg-info") + and d.name not in SKIP_PACKAGE_DIRS + ): + return d + return None + def run_check(name, call_back, log_info): parser = argparse.ArgumentParser( @@ -25,16 +43,9 @@ def run_check(name, call_back, log_info): "-t", "--test-folder", dest="test_folder", - help="The test folder we're in. Can be 'azure' or 'vanilla'", + help="The test folder we're in. Can be 'azure' or 'unbranded'", required=True, ) - parser.add_argument( - "-g", - "--generator", - dest="generator", - help="The generator we're using. Can be 'legacy', 'version-tolerant'.", - required=False, - ) parser.add_argument( "-f", "--file-name", @@ -46,28 +57,52 @@ def run_check(name, call_back, log_info): "-s", "--subfolder", dest="subfolder", - help="The specific sub folder to validate, default to Expected/AcceptanceTests. 
Optional.", + help="The subfolder containing generated code, default to 'generated'.", required=False, - default="Expected/AcceptanceTests", + default="generated", + ) + parser.add_argument( + "-j", + "--jobs", + dest="jobs", + help="Number of parallel jobs (default: CPU count)", + type=int, + required=False, + default=max(1, os.cpu_count()), ) args = parser.parse_args() - pkg_dir = Path(ROOT_FOLDER) - if args.subfolder: - pkg_dir /= Path(args.subfolder) - pkg_dir /= Path(args.test_folder) - if args.generator: - pkg_dir /= Path(args.generator) + # Path structure: tests/generated/{test_folder}/ + pkg_dir = Path(ROOT_FOLDER) / Path(args.subfolder) / Path(args.test_folder) dirs = [d for d in pkg_dir.iterdir() if d.is_dir() and not d.stem.startswith("_") and d.stem not in IGNORE_FOLDER] if args.file_name: dirs = [d for d in dirs if args.file_name.lower() in d.stem.lower()] - if len(dirs) > 1: - with Pool() as pool: - result = pool.map(call_back, dirs) - response = all(result) - else: - response = call_back(dirs[0]) - if not response: - logging.error("%s fails", log_info) + + if not dirs: + logging.info("No directories to process") + return + + logging.info(f"Processing {len(dirs)} packages with {args.jobs} parallel jobs...") + + failed = [] + succeeded = 0 + + with ProcessPoolExecutor(max_workers=args.jobs) as executor: + futures = {executor.submit(call_back, d): d for d in dirs} + for future in as_completed(futures): + pkg = futures[future] + try: + if future.result(): + succeeded += 1 + else: + failed.append(pkg.stem) + except Exception as e: + logging.error(f"{pkg.stem} raised exception: {e}") + failed.append(pkg.stem) + + logging.info(f"{log_info}: {succeeded} succeeded, {len(failed)} failed") + + if failed: + logging.error(f"{log_info} failed for: {', '.join(failed)}") exit(1) diff --git a/packages/http-client-python/eng/scripts/setup/run_batch.py b/packages/http-client-python/eng/scripts/setup/run_batch.py new file mode 100644 index 00000000000..9548962b417 --- 
/dev/null +++ b/packages/http-client-python/eng/scripts/setup/run_batch.py @@ -0,0 +1,186 @@ +# ------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for +# license information. +# -------------------------------------------------------------------------- +""" +Batch process multiple TypeSpec YAML files in a single Python process. +This avoids the overhead of spawning a new Python process for each spec. +""" + +import argparse +import json +import sys +import os +from pathlib import Path +from concurrent.futures import ProcessPoolExecutor, as_completed +from multiprocessing import freeze_support + +# Add the generator to the path +_ROOT_DIR = Path(__file__).parent.parent.parent.parent +sys.path.insert(0, str(_ROOT_DIR / "generator")) + + +def process_single_spec(config_path_str: str) -> tuple[str, bool, str]: + """Process a single spec from its config file. 
+ + Returns: (output_dir, success, error_message) + """ + # Import inside function for multiprocessing compatibility + from pygen import preprocess, codegen + + config_path = Path(config_path_str) + try: + with open(config_path, "r", encoding="utf-8") as f: + config = json.load(f) + + yaml_path = config["yamlPath"] + command_args = config["commandArgs"] + output_dir = config["outputDir"] + + # Pass command args directly to pygen - pygen expects hyphenated keys + # Remove keys that shouldn't be passed to pygen + pygen_args = {k: v for k, v in command_args.items() if k not in ["emit-yaml-only"]} + + # Run preprocess and codegen (black is batched at the end for performance) + preprocess.PreProcessPlugin(output_folder=output_dir, tsp_file=yaml_path, **pygen_args).process() + + codegen.CodeGenerator(output_folder=output_dir, tsp_file=yaml_path, **pygen_args).process() + + # Clean up the config file + config_path.unlink() + + return (output_dir, True, "") + except Exception as e: + return (str(config_path), False, str(e)) + + +def render_progress_bar(completed: int, failed: int, total: int, width: int = 40) -> str: + """Render a progress bar with green for success and red for failures.""" + success_count = completed - failed + success_width = round((success_count / total) * width) if total > 0 else 0 + fail_width = round((failed / total) * width) if total > 0 else 0 + empty_width = width - success_width - fail_width + + # ANSI color codes + green_bg = "\033[42m" + red_bg = "\033[41m" + reset = "\033[0m" + dim = "\033[2m" + cyan = "\033[36m" + + success_bar = f"{green_bg}{' ' * success_width}{reset}" + fail_bar = f"{red_bg}{' ' * fail_width}{reset}" if failed > 0 else "" + empty_bar = f"{dim}{'░' * max(0, empty_width)}{reset}" + + percent = round((completed / total) * 100) if total > 0 else 0 + return f"{success_bar}{fail_bar}{empty_bar} {cyan}{percent}%{reset} ({completed}/{total})" + + +def collect_config_files(generated_dir: str, flavor: str) -> list[str]: + """Collect 
all .tsp-codegen-*.json config files from the generated directory.""" + flavor_dir = Path(generated_dir) / "tests" / "generated" / flavor + if not flavor_dir.exists(): + return [] + + config_files = [] + for pkg_dir in flavor_dir.iterdir(): + if pkg_dir.is_dir(): + for f in pkg_dir.iterdir(): + if f.name.startswith(".tsp-codegen-") and f.name.endswith(".json"): + config_files.append(str(f)) + return config_files + + +def main(): + parser = argparse.ArgumentParser(description="Batch process TypeSpec YAML files") + parser.add_argument( + "--generated-dir", + required=True, + help="Path to the generator directory (config files are in ../tests/generated//)", + ) + parser.add_argument( + "--flavor", + required=True, + help="Flavor to process (azure or unbranded)", + ) + parser.add_argument( + "--jobs", + type=int, + default=4, + help="Number of parallel jobs (default: 4)", + ) + args = parser.parse_args() + + # Discover config files from the generated directory + config_files = collect_config_files(args.generated_dir, args.flavor) + total = len(config_files) + + if total == 0: + print("No config files found, nothing to process") + return + + print(f"Processing {total} specs with {args.jobs} parallel jobs...") + + succeeded = 0 + failed = 0 + failed_specs = [] + output_dirs = [] + is_tty = sys.stdout.isatty() + + def update_progress(): + if is_tty: + sys.stdout.write(f"\r{render_progress_bar(succeeded + failed, failed, total)}") + sys.stdout.flush() + + # Initial progress bar + update_progress() + + # Use ProcessPoolExecutor for true parallelism (bypasses GIL) + with ProcessPoolExecutor(max_workers=args.jobs) as executor: + futures = {executor.submit(process_single_spec, cf): cf for cf in config_files} + + for future in as_completed(futures): + output_dir, success, error = future.result() + if success: + succeeded += 1 + output_dirs.append(output_dir) + else: + failed += 1 + failed_specs.append(f"{output_dir}: {error}") + # Fail-fast: cancel pending futures on first 
failure + print(f"\n\033[31m[FAIL-FAST] Cancelling remaining tasks after failure...\033[0m") + for f in futures: + f.cancel() + break + update_progress() + + # Clear progress bar line + if is_tty: + sys.stdout.write("\r" + " " * 60 + "\r") + sys.stdout.flush() + + # Print failures at the end + if failed_specs: + print("\n\033[31mFailed specs:\033[0m") + for spec in failed_specs: + print(f" \033[31m•\033[0m {spec}") + + print(f"\nBatch processing complete: {succeeded} succeeded, {failed} failed") + + if failed > 0: + sys.exit(1) + + # Run black formatting after all codegen completes. Running black separately + # avoids duplicating black's import/startup cost in each worker process. + if output_dirs: + from pygen.black import BlackScriptPlugin + + print(f"Formatting {len(output_dirs)} packages with black...") + for d in output_dirs: + BlackScriptPlugin(output_folder=d).process() + + +if __name__ == "__main__": + freeze_support() # Required for Windows multiprocessing + main() diff --git a/packages/http-client-python/eng/scripts/setup/venvtools.py b/packages/http-client-python/eng/scripts/setup/venvtools.py index c4afe0bcf84..15a89fbcaa6 100644 --- a/packages/http-client-python/eng/scripts/setup/venvtools.py +++ b/packages/http-client-python/eng/scripts/setup/venvtools.py @@ -8,7 +8,6 @@ import sys from pathlib import Path - # eng/scripts/setup/venvtools.py -> need to go up 4 levels to get to package root _ROOT_DIR = Path(__file__).parent.parent.parent.parent diff --git a/packages/http-client-python/generator/pygen/__init__.py b/packages/http-client-python/generator/pygen/__init__.py index ceb67faa88e..91cfab71f69 100644 --- a/packages/http-client-python/generator/pygen/__init__.py +++ b/packages/http-client-python/generator/pygen/__init__.py @@ -16,7 +16,6 @@ from ._version import VERSION - __version__ = VERSION _LOGGER = logging.getLogger(__name__) diff --git a/packages/http-client-python/generator/pygen/codegen/__init__.py 
b/packages/http-client-python/generator/pygen/codegen/__init__.py index fc8931c83a4..74e3031729c 100644 --- a/packages/http-client-python/generator/pygen/codegen/__init__.py +++ b/packages/http-client-python/generator/pygen/codegen/__init__.py @@ -13,7 +13,6 @@ from .models.code_model import CodeModel from .serializers import JinjaSerializer - _LOGGER = logging.getLogger(__name__) diff --git a/packages/http-client-python/generator/pygen/codegen/models/base.py b/packages/http-client-python/generator/pygen/codegen/models/base.py index a27a65b0a72..0921790670d 100644 --- a/packages/http-client-python/generator/pygen/codegen/models/base.py +++ b/packages/http-client-python/generator/pygen/codegen/models/base.py @@ -7,7 +7,6 @@ from abc import ABC, abstractmethod from .imports import FileImport - if TYPE_CHECKING: from .code_model import CodeModel from .model_type import ModelType diff --git a/packages/http-client-python/generator/pygen/codegen/models/enum_type.py b/packages/http-client-python/generator/pygen/codegen/models/enum_type.py index 8a4171a0337..9cbec3d1b30 100644 --- a/packages/http-client-python/generator/pygen/codegen/models/enum_type.py +++ b/packages/http-client-python/generator/pygen/codegen/models/enum_type.py @@ -10,7 +10,6 @@ from .utils import NamespaceType, add_to_pylint_disable from ...utils import NAME_LENGTH_LIMIT - if TYPE_CHECKING: from .code_model import CodeModel diff --git a/packages/http-client-python/generator/pygen/codegen/serializers/__init__.py b/packages/http-client-python/generator/pygen/codegen/serializers/__init__.py index 018605575c9..a95b1fd2f27 100644 --- a/packages/http-client-python/generator/pygen/codegen/serializers/__init__.py +++ b/packages/http-client-python/generator/pygen/codegen/serializers/__init__.py @@ -180,6 +180,15 @@ def serialize(self) -> None: elif client_namespace_type.clients: # add clients folder if there are clients in this namespace self._serialize_client_and_config_files(client_namespace, 
client_namespace_type.clients, env) + # When generation-subdir is configured, generated code goes into a subdirectory + # (e.g., _generated/). We also need an __init__.py in the parent namespace dir + # so that the package is discoverable by find_packages() / pip install. + if self.code_model.options.get("generation-subdir"): + root_dir = self.code_model.get_root_dir() + self.write_file( + root_dir / Path("__init__.py"), + general_serializer.serialize_pkgutil_init_file(), + ) else: # add pkgutil init file if no clients in this namespace self.write_file( @@ -242,7 +251,7 @@ def _serialize_and_write_package_files(self) -> None: lstrip_blocks=True, ) - package_files = _PACKAGE_FILES + package_files = list(_PACKAGE_FILES) # Copy to avoid modifying global if not self.code_model.license_description: package_files.remove("LICENSE.jinja2") elif Path(self.code_model.options["package-mode"]).exists(): diff --git a/packages/http-client-python/generator/pygen/codegen/serializers/general_serializer.py b/packages/http-client-python/generator/pygen/codegen/serializers/general_serializer.py index 54fe489920d..ce907c55427 100644 --- a/packages/http-client-python/generator/pygen/codegen/serializers/general_serializer.py +++ b/packages/http-client-python/generator/pygen/codegen/serializers/general_serializer.py @@ -155,7 +155,10 @@ def serialize_package_file(self, template_name: str, file_content: str, **kwargs "VERSION_MAP": VERSION_MAP, "MIN_PYTHON_VERSION": MIN_PYTHON_VERSION, "MAX_PYTHON_VERSION": MAX_PYTHON_VERSION, - "ADDITIONAL_DEPENDENCIES": [f"{item[0]}>={item[1]}" for item in additional_version_map.items()], + "ADDITIONAL_DEPENDENCIES": [ + dep if dep.startswith('"') else f'"{dep}"' + for dep in (f"{item[0]}>={item[1]}" for item in additional_version_map.items()) + ], } params |= {"options": self.code_model.options} params |= kwargs diff --git a/packages/http-client-python/generator/pygen/codegen/templates/packaging_templates/pyproject.toml.jinja2 
b/packages/http-client-python/generator/pygen/codegen/templates/packaging_templates/pyproject.toml.jinja2 index 5cc41d91cb9..4a012849501 100644 --- a/packages/http-client-python/generator/pygen/codegen/templates/packaging_templates/pyproject.toml.jinja2 +++ b/packages/http-client-python/generator/pygen/codegen/templates/packaging_templates/pyproject.toml.jinja2 @@ -57,7 +57,7 @@ dependencies = [ {% endfor %} {% endif %} {% for dep in ADDITIONAL_DEPENDENCIES %} - "{{ dep }}", + {{ dep }}, {% endfor %} ] dynamic = [ diff --git a/packages/http-client-python/generator/setup.py b/packages/http-client-python/generator/setup.py index ad746f103a1..99996b606f3 100644 --- a/packages/http-client-python/generator/setup.py +++ b/packages/http-client-python/generator/setup.py @@ -47,7 +47,7 @@ ] ), install_requires=[ - "black==24.8.0", + "black==26.3.1", "docutils>=0.20.1", "Jinja2==3.1.6", "PyYAML==6.0.1", diff --git a/packages/http-client-python/package.json b/packages/http-client-python/package.json index 1b911555093..644950c2daf 100644 --- a/packages/http-client-python/package.json +++ b/packages/http-client-python/package.json @@ -46,7 +46,7 @@ "typecheck": "tsx ./eng/scripts/ci/typecheck.ts", "typecheck:generated": "tsx ./eng/scripts/ci/typecheck.ts --generated", "regenerate": "tsx ./eng/scripts/ci/regenerate.ts", - "ci": "npm run test && npm run lint && npm run typecheck", + "ci": "npm run test:emitter && npm run ci:generated", "ci:generated": "tsx ./eng/scripts/ci/run-tests.ts --generator --env=ci", "change:version": "pnpm chronus version --ignore-policies --only @typespec/http-client-python", "change:add": "pnpm chronus add", diff --git a/packages/http-client-python/tests/conftest.py b/packages/http-client-python/tests/conftest.py index d3780e9e216..a0badda9ef7 100644 --- a/packages/http-client-python/tests/conftest.py +++ b/packages/http-client-python/tests/conftest.py @@ -28,10 +28,6 @@ LOCK_FILE = Path(tempfile.gettempdir()) / "http_client_python_test_server.lock" 
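The conftest changes below lean on `wait_for_server` both to detect an already-running mock server and to gate startup across xdist workers. A stdlib-only sketch of such a polling helper, matching the signature shown in the diff context (the real body in `conftest.py` may differ):

```python
import time
import urllib.error
import urllib.request


def wait_for_server(url: str, timeout: int = 60, interval: float = 0.5) -> bool:
    """Poll `url` until it answers any HTTP response, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            urllib.request.urlopen(url, timeout=2)
            return True
        except urllib.error.HTTPError:
            # A 4xx/5xx response still proves the server is up
            return True
        except (urllib.error.URLError, OSError):
            time.sleep(interval)
    return False


# Nothing listens on this port, so the poll should give up after ~1s
print(wait_for_server("http://127.0.0.1:9/", timeout=1, interval=0.2))  # False
```

Note that `HTTPError` must be caught before `URLError` (it is a subclass); treating error responses as "server ready" is what makes the check robust when the spec server returns non-2xx for the probe path.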
PID_FILE = Path(tempfile.gettempdir()) / "http_client_python_test_server.pid" -# Global server process reference (used by hooks) -_server_process = None -_owns_server = False # Track if this process started the server - def wait_for_server(url: str, timeout: int = 60, interval: float = 0.5) -> bool: """Wait for the server to be ready by polling the URL.""" @@ -50,34 +46,29 @@ def wait_for_server(url: str, timeout: int = 60, interval: float = 0.5) -> bool: def start_server_process(): - """Start the tsp-spector mock API server.""" + """Start the tsp-spector mock API server. + + Always serves both azure-http-specs and http-specs regardless of flavor. + This allows azure and unbranded tests to run in parallel using the same server. + """ azure_http_path = ROOT / "node_modules/@azure-tools/azure-http-specs" http_path = ROOT / "node_modules/@typespec/http-specs" - # Determine flavor from environment or current directory - flavor = os.environ.get("FLAVOR", "azure") - + # Always serve both spec sets so azure and unbranded tests can run in parallel # Use absolute paths with forward slashes (works on all platforms including Windows) - if flavor == "unbranded": - cwd = http_path.resolve() - specs_path = str(cwd / "specs").replace("\\", "/") - cmd = f"npx tsp-spector serve {specs_path}" - else: - cwd = azure_http_path.resolve() - azure_specs = str(cwd / "specs").replace("\\", "/") - http_specs = str((http_path / "specs").resolve()).replace("\\", "/") - cmd = f"npx tsp-spector serve {azure_specs} {http_specs}" + cwd = azure_http_path.resolve() + azure_specs = str(cwd / "specs").replace("\\", "/") + http_specs = str((http_path / "specs").resolve()).replace("\\", "/") + cmd = f"npx tsp-spector serve {azure_specs} {http_specs}" # Add node_modules/.bin to PATH env = os.environ.copy() node_bin = str(ROOT / "node_modules" / ".bin") env["PATH"] = f"{node_bin}{os.pathsep}{env.get('PATH', '')}" - # Suppress server stdout/stderr to avoid confusing "Request validation failed" warnings - # 
in test output. Server readiness is validated via HTTP polling in wait_for_server(). if os.name == "nt": - return subprocess.Popen(cmd, shell=True, cwd=str(cwd), env=env, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) - return subprocess.Popen(cmd, shell=True, cwd=str(cwd), env=env, preexec_fn=os.setsid, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) + return subprocess.Popen(cmd, shell=True, cwd=str(cwd), env=env) + return subprocess.Popen(cmd, shell=True, cwd=str(cwd), env=env, preexec_fn=os.setsid) def terminate_server_process(process): @@ -105,91 +96,35 @@ def terminate_server_process(process): pass -def pytest_configure(config): - """Start the mock server before any tests run. - - Uses file locking to ensure only one process starts the server, - even when running with pytest-xdist. The controller process starts - the server and workers wait for it to be ready. - """ - global _server_process, _owns_server - - # Check if server is already running (e.g., from a previous run or external process) - if wait_for_server(SERVER_URL, timeout=1, interval=0.1): - print(f"Mock API server already running at {SERVER_URL}") - return - - # Use file lock to ensure only one process starts the server - # This handles both xdist workers and multiple test runs - lock = FileLock(str(LOCK_FILE), timeout=120) - - try: - with lock: - # Double-check after acquiring lock (another process may have started it) - if wait_for_server(SERVER_URL, timeout=1, interval=0.1): - print(f"Mock API server already running at {SERVER_URL}") - return - - # We're the first process - start the server - print(f"Starting mock API server...") - _server_process = start_server_process() - _owns_server = True - - # Check if process started successfully - if _server_process.poll() is not None: - pytest.exit(f"Mock API server process exited immediately with code {_server_process.returncode}") - - # Write PID file so other processes know who owns the server - 
PID_FILE.write_text(str(_server_process.pid)) - - # Wait for server to be ready - if not wait_for_server(SERVER_URL, timeout=60): - if _server_process.poll() is not None: - pytest.exit(f"Mock API server process died with code {_server_process.returncode}") - terminate_server_process(_server_process) - _server_process = None - _owns_server = False - pytest.exit(f"Mock API server failed to start within 60 seconds at {SERVER_URL}") - - print(f"Mock API server ready at {SERVER_URL}") - - except TimeoutError: - # Another process is holding the lock for too long - # Check if server is available anyway - if wait_for_server(SERVER_URL, timeout=5): - print(f"Mock API server available at {SERVER_URL} (started by another process)") - else: - pytest.exit("Timeout waiting for server lock - another process may be stuck") - - -def pytest_unconfigure(config): - """Stop the mock server after all tests complete.""" - global _server_process, _owns_server - - # Only stop the server if this process started it - if not _owns_server: - return - - terminate_server_process(_server_process) - _server_process = None - _owns_server = False - - # Clean up PID file - try: - PID_FILE.unlink(missing_ok=True) - except Exception: - pass - - @pytest.fixture(scope="session", autouse=True) -def testserver(request): - """Ensure the mock server is ready before tests run. +def testserver(): + """Start the mock API server, coordinated across xdist workers via file lock. - The server is started in pytest_configure (controller process). - This fixture just verifies the server is accessible from workers. + The first process to acquire the lock starts the server; others wait for it. + The server is intentionally NOT killed in teardown — with xdist, the owning + worker may finish before others, killing the server prematurely. The server + is cleaned up when the tox/parent process exits. 
""" + # Check if server is already running + if not wait_for_server(SERVER_URL, timeout=1, interval=0.1): + lock = FileLock(str(LOCK_FILE), timeout=120) + try: + with lock: + # Double-check after acquiring lock + if not wait_for_server(SERVER_URL, timeout=1, interval=0.1): + server = start_server_process() + PID_FILE.write_text(str(server.pid)) + if not wait_for_server(SERVER_URL, timeout=60): + terminate_server_process(server) + pytest.fail(f"Mock API server failed to start at {SERVER_URL}") + except TimeoutError: + if not wait_for_server(SERVER_URL, timeout=5): + pytest.fail("Timeout waiting for server lock") + + # Final check that server is reachable if not wait_for_server(SERVER_URL, timeout=30): pytest.fail(f"Mock API server not available at {SERVER_URL}") + yield diff --git a/packages/http-client-python/tests/install_packages.py b/packages/http-client-python/tests/install_packages.py index f706ae93732..ae7862e33fd 100644 --- a/packages/http-client-python/tests/install_packages.py +++ b/packages/http-client-python/tests/install_packages.py @@ -1,8 +1,12 @@ #!/usr/bin/env python """Install generated packages for testing. -This script handles cross-platform path issues that can occur with inline -tox commands on Windows. +Supports two modes: +1. Build wheels from source dirs into a wheel directory (build command) +2. Install from pre-built wheels via --find-links (instant, no compilation) + +The build step runs once before tox envs start. Each tox env then installs +from pre-built wheels, avoiding redundant source builds across environments. 
""" import glob @@ -11,57 +15,137 @@ import sys -def install_packages(flavor: str, tests_dir: str) -> None: +def _find_packages(generated_dir): + """Find all package directories that have pyproject.toml or setup.py.""" + all_dirs = glob.glob(os.path.join(generated_dir, "*")) + return sorted([ + p for p in all_dirs + if os.path.isdir(p) and ( + os.path.exists(os.path.join(p, "pyproject.toml")) or + os.path.exists(os.path.join(p, "setup.py")) + ) + ]) + + +def build_wheels(flavor, tests_dir): + """Build wheels for all packages into a shared directory.""" + generated_dir = os.path.join(tests_dir, "generated", flavor) + wheel_dir = os.path.join(tests_dir, ".wheels", flavor) + os.makedirs(wheel_dir, exist_ok=True) + + packages = _find_packages(generated_dir) + if not packages: + print(f"Warning: No packages found in {generated_dir}") + return + + print(f"Building {len(packages)} wheels for {flavor}...") + + batch_size = 50 + for i in range(0, len(packages), batch_size): + batch = packages[i:i + batch_size] + try: + subprocess.run( + ["uv", "pip", "wheel", "--no-deps", "--wheel-dir", wheel_dir] + batch, + check=True, + ) + except FileNotFoundError: + subprocess.run( + [sys.executable, "-m", "pip", "wheel", "--no-deps", "--wheel-dir", wheel_dir] + batch, + check=True, + ) + + wheel_count = len(glob.glob(os.path.join(wheel_dir, "*.whl"))) + print(f"Built {wheel_count} wheels for {flavor}") + + +def install_packages(flavor, tests_dir): """Install generated packages for the given flavor.""" generated_dir = os.path.join(tests_dir, "generated", flavor) + wheel_dir = os.path.join(tests_dir, ".wheels", flavor) if not os.path.exists(generated_dir): print(f"Warning: Generated directory does not exist: {generated_dir}") return - # Find all package directories - packages = glob.glob(os.path.join(generated_dir, "*")) - packages = [p for p in packages if os.path.isdir(p)] - + packages = _find_packages(generated_dir) if not packages: print(f"Warning: No packages found in 
{generated_dir}") return print(f"Installing {len(packages)} packages from {generated_dir}") - # Install packages using uv pip - # Use --no-deps to avoid dependency resolution overhead - cmd = ["uv", "pip", "install", "--no-deps"] + packages - - try: - subprocess.run(cmd, check=True) - print(f"Successfully installed {len(packages)} packages") - except subprocess.CalledProcessError as e: - print(f"Error installing packages: {e}") - sys.exit(1) - except FileNotFoundError: - # uv not found, try pip - print("uv not found, falling back to pip") - cmd = [sys.executable, "-m", "pip", "install", "--no-deps"] + packages - subprocess.run(cmd, check=True) + use_wheels = os.path.isdir(wheel_dir) and bool(glob.glob(os.path.join(wheel_dir, "*.whl"))) + use_uv = True + + if use_wheels: + # Install from pre-built wheels (instant, no compilation). + # Use wheel filenames to derive package specs since --no-index + # won't resolve source directory paths. + wheel_files = glob.glob(os.path.join(wheel_dir, "*.whl")) + print(f" Using {len(wheel_files)} pre-built wheels from .wheels/{flavor}/") + try: + cmd = ["uv", "pip", "install", "--no-deps", "--no-index", "--find-links", wheel_dir] + wheel_files + subprocess.run(cmd, check=True) + print(f"Successfully installed {len(wheel_files)} packages") + return + except FileNotFoundError: + use_uv = False + try: + cmd = [sys.executable, "-m", "pip", "install", "--no-deps", "--no-index", "--find-links", wheel_dir] + wheel_files + subprocess.run(cmd, check=True) + print(f"Successfully installed {len(wheel_files)} packages") + return + except subprocess.CalledProcessError: + print(" Wheel install failed, falling back to source install") + except subprocess.CalledProcessError: + print(" Wheel install failed, falling back to source install") + + # Fall back to source install with per-flavor cache + cache_dir = os.path.join(tests_dir, ".uv-cache", flavor) + batch_size = 50 + for i in range(0, len(packages), batch_size): + batch = packages[i:i + 
batch_size]
+        if use_uv:
+            cmd = ["uv", "pip", "install", "--no-deps", "--cache-dir", cache_dir] + batch
+        else:
+            cmd = [sys.executable, "-m", "pip", "install", "--no-deps"] + batch
+        try:
+            subprocess.run(cmd, check=True)
+        except subprocess.CalledProcessError as e:
+            print(f"Error installing packages: {e}")
+            sys.exit(1)
+        except FileNotFoundError:
+            if use_uv:
+                print("uv not found, falling back to pip")
+                use_uv = False
+                cmd = [sys.executable, "-m", "pip", "install", "--no-deps"] + batch
+                subprocess.run(cmd, check=True)
+
+    print(f"Successfully installed {len(packages)} packages")
 
 
 def main():
     if len(sys.argv) < 2:
         print("Usage: install_packages.py <flavor> [tests_dir]")
-        print("  flavor: azure or unbranded")
-        print("  tests_dir: optional, defaults to script directory")
+        print("       install_packages.py build <flavor> [tests_dir]")
         sys.exit(1)
 
-    flavor = sys.argv[1]
-    tests_dir = sys.argv[2] if len(sys.argv) > 2 else os.path.dirname(os.path.abspath(__file__))
-
-    if flavor not in ("azure", "unbranded"):
-        print(f"Error: Invalid flavor '{flavor}'. 
Must be 'azure' or 'unbranded'")
+    # Support both old-style (install_packages.py <flavor>) and the new build command
+    if sys.argv[1] == "build":
+        if len(sys.argv) < 3:
+            print("Usage: install_packages.py build <flavor> [tests_dir]")
+            sys.exit(1)
+        flavor = sys.argv[2]
+        tests_dir = sys.argv[3] if len(sys.argv) > 3 else os.path.dirname(os.path.abspath(__file__))
+        build_wheels(flavor, tests_dir)
+    elif sys.argv[1] in ("azure", "unbranded"):
+        flavor = sys.argv[1]
+        tests_dir = sys.argv[2] if len(sys.argv) > 2 else os.path.dirname(os.path.abspath(__file__))
+        install_packages(flavor, tests_dir)
+    else:
+        print(f"Error: Unknown command or flavor '{sys.argv[1]}'")
         sys.exit(1)
 
-    install_packages(flavor, tests_dir)
-
 
 if __name__ == "__main__":
     main()
diff --git a/packages/http-client-python/tests/mock_api/azure/conftest.py b/packages/http-client-python/tests/mock_api/azure/conftest.py
index 29824951ef4..20d7152e5d4 100644
--- a/packages/http-client-python/tests/mock_api/azure/conftest.py
+++ b/packages/http-client-python/tests/mock_api/azure/conftest.py
@@ -3,9 +3,6 @@
 # Licensed under the MIT License. See License.txt in the project root for
 # license information.
# -------------------------------------------------------------------------- -import os -import subprocess -import signal import pytest import re from typing import Literal @@ -14,35 +11,6 @@ FILE_FOLDER = Path(__file__).parent -def start_server_process(): - azure_http_path = Path(os.path.dirname(__file__)) / Path("../../../node_modules/@azure-tools/azure-http-specs") - http_path = Path(os.path.dirname(__file__)) / Path("../../../node_modules/@typespec/http-specs") - os.chdir(azure_http_path.resolve()) - cmd = f"npx tsp-spector serve ./specs {(http_path / 'specs').resolve()}" - if os.name == "nt": - return subprocess.Popen(cmd, shell=True) - return subprocess.Popen(cmd, shell=True, preexec_fn=os.setsid) - - -def terminate_server_process(process): - try: - if os.name == "nt": - process.kill() - else: - os.killpg(os.getpgid(process.pid), signal.SIGTERM) # Send the signal to all the process groups - except ProcessLookupError: - # Process already terminated, which is fine - pass - - -@pytest.fixture(scope="session", autouse=True) -def testserver(): - """Start spector ranch mock api tests""" - server = start_server_process() - yield - terminate_server_process(server) - - _VALID_UUID = re.compile(r"^[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}$") _VALID_RFC7231 = re.compile( r"^(Mon|Tue|Wed|Thu|Fri|Sat|Sun),\s\d{2}\s" diff --git a/packages/http-client-python/tests/mock_api/shared/conftest.py b/packages/http-client-python/tests/mock_api/shared/conftest.py index 727f986ae44..0f5685c338c 100644 --- a/packages/http-client-python/tests/mock_api/shared/conftest.py +++ b/packages/http-client-python/tests/mock_api/shared/conftest.py @@ -3,9 +3,6 @@ # Licensed under the MIT License. See License.txt in the project root for # license information. 
# -------------------------------------------------------------------------- -import os -import subprocess -import signal import pytest import importlib from pathlib import Path @@ -13,39 +10,6 @@ DATA_FOLDER = Path(__file__).parent.parent -def start_server_process(): - azure_http_path = Path(os.path.dirname(__file__)) / Path("../../../node_modules/@azure-tools/azure-http-specs") - http_path = Path(os.path.dirname(__file__)) / Path("../../../node_modules/@typespec/http-specs") - if "unbranded" in Path(os.getcwd()).parts: - os.chdir(http_path.resolve()) - cmd = "npx tsp-spector serve ./specs" - else: - os.chdir(azure_http_path.resolve()) - cmd = f"npx tsp-spector serve ./specs {(http_path / 'specs').resolve()}" - if os.name == "nt": - return subprocess.Popen(cmd, shell=True) - return subprocess.Popen(cmd, shell=True, preexec_fn=os.setsid) - - -def terminate_server_process(process): - try: - if os.name == "nt": - process.kill() - else: - os.killpg(os.getpgid(process.pid), signal.SIGTERM) # Send the signal to all the process groups - except ProcessLookupError: - # Process already terminated, which is fine - pass - - -@pytest.fixture(scope="session", autouse=True) -def testserver(): - """Start spector mock api tests""" - server = start_server_process() - yield - terminate_server_process(server) - - """ Use to disambiguate the core library we use """ diff --git a/packages/http-client-python/tests/mock_api/unbranded/conftest.py b/packages/http-client-python/tests/mock_api/unbranded/conftest.py index 5d190ae0bd4..df7cb9efbea 100644 --- a/packages/http-client-python/tests/mock_api/unbranded/conftest.py +++ b/packages/http-client-python/tests/mock_api/unbranded/conftest.py @@ -3,44 +3,12 @@ # Licensed under the MIT License. See License.txt in the project root for # license information. 
# -------------------------------------------------------------------------- -import os -import subprocess -import signal import pytest -import re from pathlib import Path FILE_FOLDER = Path(__file__).parent -def start_server_process(): - http_path = Path(os.path.dirname(__file__)) / Path("../../../node_modules/@typespec/http-specs") - os.chdir(http_path.resolve()) - cmd = "tsp-spector serve ./specs" - if os.name == "nt": - return subprocess.Popen(cmd, shell=True) - return subprocess.Popen(cmd, shell=True, preexec_fn=os.setsid) - - -def terminate_server_process(process): - try: - if os.name == "nt": - process.kill() - else: - os.killpg(os.getpgid(process.pid), signal.SIGTERM) # Send the signal to all the process groups - except ProcessLookupError: - # Process already terminated, which is fine - pass - - -@pytest.fixture(scope="session", autouse=True) -def testserver(): - """Start spector mock api tests""" - server = start_server_process() - yield - terminate_server_process(server) - - SPECIAL_WORDS = [ "and", "as", diff --git a/packages/http-client-python/tests/mock_api/unbranded/test_unbranded.py b/packages/http-client-python/tests/mock_api/unbranded/test_unbranded.py index f7366edc0a5..60d537d3293 100644 --- a/packages/http-client-python/tests/mock_api/unbranded/test_unbranded.py +++ b/packages/http-client-python/tests/mock_api/unbranded/test_unbranded.py @@ -31,7 +31,7 @@ def test_track_back(client: ScalarClient): assert "microsoft" not in track_back -_SKIP_DIRS = {"__pycache__", "pytest_cache", ".pytest_cache"} +_SKIP_DIRS = {"__pycache__", "pytest_cache", ".pytest_cache", "generated_tests"} def check_sensitive_word(folder: Path, word: str) -> list[str]: diff --git a/packages/http-client-python/tests/requirements/docs.txt b/packages/http-client-python/tests/requirements/docs.txt index 7839a0e726f..5b80499ba34 100644 --- a/packages/http-client-python/tests/requirements/docs.txt +++ b/packages/http-client-python/tests/requirements/docs.txt @@ -1,6 +1,8 @@ # 
Documentation dependencies -r base.txt pip +pylint +pkginfo sphinx>=7.0.0 sphinx_rtd_theme>=2.0.0 myst_parser>=2.0.0 diff --git a/packages/http-client-python/tests/requirements/lint.txt b/packages/http-client-python/tests/requirements/lint.txt index 2a9896f8d75..736a7806543 100644 --- a/packages/http-client-python/tests/requirements/lint.txt +++ b/packages/http-client-python/tests/requirements/lint.txt @@ -1,4 +1,4 @@ # Linting dependencies -r base.txt pylint==4.0.4 -black==24.8.0 +black==26.3.1 diff --git a/packages/http-client-python/tests/tox.ini b/packages/http-client-python/tests/tox.ini index 12231969731..59734261e75 100644 --- a/packages/http-client-python/tests/tox.ini +++ b/packages/http-client-python/tests/tox.ini @@ -1,5 +1,5 @@ [tox] -envlist = test-{azure,unbranded}, lint-{azure,unbranded}, mypy-{azure,unbranded}, pyright-{azure,unbranded}, unittest +envlist = test-{azure,unbranded}, check-{azure,unbranded}, docs-{azure,unbranded}, unittest skipsdist = True isolated_build = True requires = tox-uv @@ -28,7 +28,7 @@ deps = -e {tox_root}/../generator commands = python {tox_root}/install_packages.py azure {tox_root} - pytest mock_api/azure mock_api/shared -v -n auto -n auto {posargs} + pytest mock_api/azure mock_api/shared -v -n auto {posargs} [testenv:test-unbranded] description = Run tests for unbranded flavor @@ -41,7 +41,7 @@ deps = -e {tox_root}/../generator commands = python {tox_root}/install_packages.py unbranded {tox_root} - pytest mock_api/unbranded mock_api/shared -v -n auto -n auto {posargs} + pytest mock_api/unbranded mock_api/shared -v -n auto {posargs} [testenv:unittest] description = Run unit tests for pygen internals @@ -54,11 +54,13 @@ commands = pytest unit/ -v -n auto {posargs} # ============================================================================= -# Lint environments +# Lint, type checking, and static analysis environments +# Split into separate envs for maximum parallelism. 
Pre-built wheels make +# per-env package installs cheap (~5s instead of ~2min). # ============================================================================= [testenv:lint-azure] -description = Run linting for Azure flavor +description = Run pylint for Azure flavor setenv = {[testenv]setenv} FLAVOR = azure @@ -67,12 +69,12 @@ deps = -r {tox_root}/requirements/azure.txt -e {tox_root}/../generator commands = - uv pip install azure-pylint-guidelines-checker==0.5.2 --index-url="https://pkgs.dev.azure.com/azure-sdk/public/_packaging/azure-sdk-for-python/pypi/simple/" + pip install azure-pylint-guidelines-checker==0.5.2 --extra-index-url="https://pkgs.dev.azure.com/azure-sdk/public/_packaging/azure-sdk-for-python/pypi/simple/" python {tox_root}/install_packages.py azure {tox_root} python {tox_root}/../eng/scripts/ci/run_pylint.py -t azure -s generated {posargs} [testenv:lint-unbranded] -description = Run linting for unbranded flavor +description = Run pylint for unbranded flavor setenv = {[testenv]setenv} FLAVOR = unbranded @@ -84,10 +86,6 @@ commands = python {tox_root}/install_packages.py unbranded {tox_root} python {tox_root}/../eng/scripts/ci/run_pylint.py -t unbranded -s generated {posargs} -# ============================================================================= -# Type checking environments (separate mypy and pyright) -# ============================================================================= - [testenv:mypy-azure] description = Run mypy type checking for Azure flavor setenv = @@ -95,6 +93,7 @@ setenv = FLAVOR = azure deps = -r {tox_root}/requirements/typecheck.txt + -r {tox_root}/requirements/azure.txt -e {tox_root}/../generator commands = python {tox_root}/install_packages.py azure {tox_root} @@ -107,6 +106,7 @@ setenv = FLAVOR = unbranded deps = -r {tox_root}/requirements/typecheck.txt + -r {tox_root}/requirements/unbranded.txt -e {tox_root}/../generator commands = python {tox_root}/install_packages.py unbranded {tox_root} @@ -119,6 +119,7 @@ 
setenv = FLAVOR = azure deps = -r {tox_root}/requirements/typecheck.txt + -r {tox_root}/requirements/azure.txt -e {tox_root}/../generator commands = python {tox_root}/install_packages.py azure {tox_root} @@ -131,17 +132,18 @@ setenv = FLAVOR = unbranded deps = -r {tox_root}/requirements/typecheck.txt + -r {tox_root}/requirements/unbranded.txt -e {tox_root}/../generator commands = python {tox_root}/install_packages.py unbranded {tox_root} python {tox_root}/../eng/scripts/ci/run_pyright.py -t unbranded -s generated {posargs} # ============================================================================= -# Documentation environments +# Documentation environments (apiview and sphinx split for parallelism) # ============================================================================= -[testenv:docs-azure] -description = Run documentation validation for Azure flavor +[testenv:apiview-azure] +description = Run apiview validation for Azure flavor basepython = python3.10 setenv = {[testenv]setenv} @@ -150,13 +152,13 @@ deps = -r {tox_root}/requirements/docs.txt -e {tox_root}/../generator commands = - uv pip install apiview-stub-generator>=0.3.19 --index-url="https://pkgs.dev.azure.com/azure-sdk/public/_packaging/azure-sdk-for-python/pypi/simple/" + pip install apiview-stub-generator>=0.3.19 pylint-guidelines-checker --no-deps --index-url="https://pkgs.dev.azure.com/azure-sdk/public/_packaging/azure-sdk-for-python/pypi/simple/" + pip install astroid charset-normalizer pylint pkginfo python {tox_root}/install_packages.py azure {tox_root} python {tox_root}/../eng/scripts/ci/run_apiview.py -t azure -s generated {posargs} - python {tox_root}/../eng/scripts/ci/run_sphinx_build.py -t azure -s generated {posargs} -[testenv:docs-unbranded] -description = Run documentation validation for unbranded flavor +[testenv:apiview-unbranded] +description = Run apiview validation for unbranded flavor basepython = python3.10 setenv = {[testenv]setenv} @@ -165,45 +167,35 @@ deps = -r 
{tox_root}/requirements/docs.txt -e {tox_root}/../generator commands = - uv pip install apiview-stub-generator>=0.3.19 --index-url="https://pkgs.dev.azure.com/azure-sdk/public/_packaging/azure-sdk-for-python/pypi/simple/" + pip install apiview-stub-generator>=0.3.19 pylint-guidelines-checker --no-deps --index-url="https://pkgs.dev.azure.com/azure-sdk/public/_packaging/azure-sdk-for-python/pypi/simple/" + pip install astroid charset-normalizer pylint pkginfo python {tox_root}/install_packages.py unbranded {tox_root} python {tox_root}/../eng/scripts/ci/run_apiview.py -t unbranded -s generated {posargs} - python {tox_root}/../eng/scripts/ci/run_sphinx_build.py -t unbranded -s generated {posargs} - -# ============================================================================= -# CI environments (combines all checks) -# ============================================================================= -[testenv:ci-azure] -description = Run full CI for Azure flavor +[testenv:sphinx-azure] +description = Run sphinx docstring validation for Azure flavor +basepython = python3.10 setenv = {[testenv]setenv} FLAVOR = azure deps = - -r {tox_root}/requirements/lint.txt - -r {tox_root}/requirements/typecheck.txt + -r {tox_root}/requirements/docs.txt -r {tox_root}/requirements/azure.txt -e {tox_root}/../generator commands = python {tox_root}/install_packages.py azure {tox_root} - pytest mock_api/azure mock_api/shared -v -n auto - python {tox_root}/../eng/scripts/ci/run_pylint.py -t azure -s generated - python {tox_root}/../eng/scripts/ci/run_mypy.py -t azure -s generated - python {tox_root}/../eng/scripts/ci/run_pyright.py -t azure -s generated + python {tox_root}/../eng/scripts/ci/run_sphinx_build.py -t azure -s generated {posargs} -[testenv:ci-unbranded] -description = Run full CI for unbranded flavor +[testenv:sphinx-unbranded] +description = Run sphinx docstring validation for unbranded flavor +basepython = python3.10 setenv = {[testenv]setenv} FLAVOR = unbranded deps = - -r 
{tox_root}/requirements/lint.txt - -r {tox_root}/requirements/typecheck.txt + -r {tox_root}/requirements/docs.txt -r {tox_root}/requirements/unbranded.txt -e {tox_root}/../generator commands = python {tox_root}/install_packages.py unbranded {tox_root} - pytest mock_api/unbranded mock_api/shared -v -n auto - python {tox_root}/../eng/scripts/ci/run_pylint.py -t unbranded -s generated - python {tox_root}/../eng/scripts/ci/run_mypy.py -t unbranded -s generated - python {tox_root}/../eng/scripts/ci/run_pyright.py -t unbranded -s generated + python {tox_root}/../eng/scripts/ci/run_sphinx_build.py -t unbranded -s generated {posargs}
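Taken together, the diff above replaces per-env source installs with a build-once/install-many wheel flow, chunking the package list into fixed-size batches so individual `uv pip` command lines stay a manageable length. A minimal standalone sketch of that batching pattern (hypothetical package names; not code from the repo):

```python
def batched(items, batch_size=50):
    """Split a list into fixed-size chunks, mirroring the batching loop
    used by build_wheels/install_packages in install_packages.py."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]


# 120 hypothetical package paths would be installed in three subprocess calls
packages = [f"pkg-{n}" for n in range(120)]
chunks = batched(packages)
print([len(c) for c in chunks])  # three batches: 50, 50, 20
```

Each chunk would then be appended to a single `uv pip install`/`pip wheel` invocation, as in the diff's `for i in range(0, len(packages), batch_size)` loops.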