
feat: Add database backup and restore utilities#26607

Open
pmbrull wants to merge 5 commits into main from task/add-database-backup-and-restore-utilitie-3e452359

Conversation


@pmbrull pmbrull commented Mar 19, 2026

Summary

  • Adds DatabaseBackupRestore utility class for full database backup and restore via JDBI
  • Adds backup and restore CLI commands to OpenMetadataOperations
  • Adds MigrationTestRunner with MigrationTestCase interface for migration validation
  • Adds test-migration CLI command that restores a backup, runs pending migrations one by one, and executes before/after test assertions
  • Supports both MySQL and PostgreSQL using information_schema for dynamic table/column discovery
  • Automatically skips GENERATED columns during backup (entity tables store derived columns from JSON)
  • Backup creates a .tar.gz archive with metadata.json (timestamp, version, db type, per-table info) and one JSON file per table
  • Restore validates database type match, supports --force flag to truncate existing data, and handles FK constraints
  • Migration test summary prints to stdout for easy PR attachment
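
For illustration, the archive described above would unpack to one JSON file per table plus a metadata.json whose top-level fields follow the summary (timestamp, version, database type, per-table info). The table names and per-table fields in this sketch are hypothetical, not the exact schema:

```json
{
  "version": "1.11.0",
  "timestamp": "2026-03-19T14:30:00Z",
  "databaseType": "MYSQL",
  "tables": {
    "table_entity": { "rowCount": 1250, "columns": ["id", "json"] },
    "user_entity": { "rowCount": 42, "columns": ["id", "json"] }
  }
}
```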

Usage

# Backup
./openmetadata-ops.sh backup -c conf/openmetadata.yaml --backup-path /path/to/backup.tar.gz

# Restore (fails if tables have data)
./openmetadata-ops.sh restore -c conf/openmetadata.yaml --backup-path /path/to/backup.tar.gz

# Restore with force (truncates existing tables first)
./openmetadata-ops.sh restore -c conf/openmetadata.yaml --backup-path /path/to/backup.tar.gz --force

# Test migrations against a backup
./openmetadata-ops.sh test-migration -c conf/openmetadata.yaml --backup-path /path/to/backup.tar.gz

Migration Test Framework

Define test classes per migration version following the naming convention:

package org.openmetadata.service.migration.tests.v1120; // for version 1.12.0

import java.util.List;
import org.jdbi.v3.core.Handle;

public class MigrationTest implements MigrationTestCase {
  @Override
  public List<TestResult> validateBefore(Handle handle) {
    // assertions before the migration runs
    return List.of();
  }

  @Override
  public List<TestResult> validateAfter(Handle handle) {
    // assertions after the migration runs
    return List.of();
  }
}

The test-migration command outputs a summary table:

============================================================
              Migration Test Summary
============================================================
 Source version  : 1.11.0
 Target version  : 1.12.0
 Database type   : MYSQL
 Backup timestamp: 2026-03-19T14:30:00Z
------------------------------------------------------------
 Migration | Test                         | Phase  | Result
------------------------------------------------------------
 1.11.1    | (no tests)                   |   -    |   -
 1.11.2    | column X exists              | BEFORE |  PASS
 1.12.0    | config migrated              | AFTER  |  FAIL
           |   Expected 5 rows, got 3     |        |
------------------------------------------------------------
 Total: 1 passed, 1 failed
============================================================

Test plan

  • Unit tests for extractDatabaseName (4 tests)
  • Unit tests for versionToPackage (5 tests)
  • Manual test: backup against a local MySQL database, verify archive structure
  • Manual test: restore to an empty database, verify data integrity
  • Manual test: restore without --force on non-empty database fails with table listing
  • Manual test: restore with --force truncates and restores successfully
  • Manual test: test-migration with a sample MigrationTest class

pmbrull and others added 3 commits March 19, 2026 18:35
Introduces a new utility class that provides database backup and restore
functionality for OpenMetadata's CLI. It discovers all tables via
information_schema, filters out GENERATED columns, dumps everything to
a .tar.gz archive with metadata, and restores from such archives.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings March 19, 2026 17:42
@github-actions github-actions bot added Ingestion safe to test Add this label to run secure Github workflows on PRs labels Mar 19, 2026
Comment on lines +244 to +250
try {
  jdbi.useHandle(this::disableForeignKeyChecks);
  restoreTablesFromArchive(backupPath, tablesMetadata);
  LOG.info("Restore completed successfully");
} finally {
  jdbi.useHandle(this::enableForeignKeyChecks);
}

🚨 Bug: FK checks disabled on separate handle, not effective for inserts

At line 245, jdbi.useHandle(this::disableForeignKeyChecks) disables FK checks on a handle that is immediately closed. Then restoreTablesFromArchive (line 246) calls jdbi.useHandle(handle -> insertRows(...)) at line 303, which obtains a different connection from the pool. SET FOREIGN_KEY_CHECKS = 0 (MySQL) and SET session_replication_role = 'replica' (Postgres) are session-level settings — they only apply to the connection they were executed on. Since each jdbi.useHandle() may obtain a different pooled connection, the inserts will run with FK checks still enabled, causing failures when tables are restored in an order that violates foreign key constraints.

The same issue affects the enableForeignKeyChecks call in the finally block (line 249) — it may re-enable checks on yet another connection that never had them disabled.

Suggested fix:

// Use a single handle for the entire restore operation:
jdbi.useHandle(handle -> {
  try {
    disableForeignKeyChecks(handle);
    restoreTablesFromArchive(handle, backupPath, tablesMetadata);
  } finally {
    enableForeignKeyChecks(handle);
  }
});
// Update restoreTablesFromArchive to accept a Handle
// and pass it to insertRows directly instead of
// calling jdbi.useHandle() at line 303.


Comment on lines +397 to +402
String quoteIdentifier(String identifier) {
  if (connectionType == ConnectionType.MYSQL) {
    return "`" + identifier + "`";
  }
  return "\"" + identifier + "\"";
}

⚠️ Security: SQL injection via crafted backup archive table/column names

During restore, table names are extracted from tar entry paths (line 284) and metadata.json field names (line 233) — both sourced from the untrusted backup archive. These names flow into SQL statements via quoteIdentifier() (lines 397-402), which wraps identifiers in backticks or double quotes but does NOT escape those delimiter characters within the identifier itself. A crafted archive with a table name like foo` DROP TABLE users; -- could inject arbitrary SQL. quoteIdentifier is used in SELECT (line 314), TRUNCATE (line 336), and INSERT (line 350) statements. Column names from metadata.json are similarly unvalidated (line 292, used in INSERT at line 347). While the attack requires someone to provide a malicious .tar.gz file to the restore command, this is a realistic threat for a backup/restore utility — especially if backups are downloaded from remote storage.

Suggested fix:

// 1. Validate identifiers with a strict allowlist pattern:
private static final Pattern SAFE_IDENTIFIER =
    Pattern.compile("^[a-zA-Z_][a-zA-Z0-9_]*$");

String quoteIdentifier(String identifier) {
  if (!SAFE_IDENTIFIER.matcher(identifier).matches()) {
    throw new IllegalArgumentException(
        "Invalid identifier: " + identifier);
  }
  // existing quoting logic...
}
// 2. Optionally cross-check restored table names against
//    discoverTables() to ensure they exist in the DB.
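
As a sanity check, the allowlist pattern from the suggestion can be exercised on its own. The class name and sample identifiers below are illustrative:

```java
import java.util.regex.Pattern;

public class IdentifierCheck {
  // Strict allowlist from the suggested fix above: ASCII letters, digits,
  // and underscores, not starting with a digit.
  static final Pattern SAFE_IDENTIFIER =
      Pattern.compile("^[a-zA-Z_][a-zA-Z0-9_]*$");

  static boolean isSafe(String identifier) {
    return SAFE_IDENTIFIER.matcher(identifier).matches();
  }

  public static void main(String[] args) {
    // A legitimate table name passes.
    System.out.println(isSafe("entity_relationship")); // true
    // A name trying to break out of backtick quoting is rejected.
    System.out.println(isSafe("foo` DROP TABLE users; --")); // false
    // Dotted identifiers are rejected too: a strict allowlist avoids having
    // to reason about escaping delimiters at all.
    System.out.println(isSafe("schema.table")); // false
  }
}
```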


Comment on lines +119 to +127
jdbi.useHandle(
    handle -> {
      List<String> tables = discoverTables(handle);
      LOG.info("Discovered {} tables", tables.size());

      for (String tableName : tables) {
        backupTable(handle, tableName, taos, tablesMetadata);
      }
    });

💡 Edge Case: Backup reads tables without snapshot isolation

The backup() method reads tables sequentially within a single jdbi.useHandle callback, but without setting a transaction isolation level (e.g., REPEATABLE READ or SERIALIZABLE). If the database is being written to during backup, the resulting archive may contain an inconsistent snapshot — e.g., a foreign key reference in table A pointing to a row in table B that wasn't captured because it was deleted between the two table reads.
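
One way to address this, sketched here with plain JDBC rather than the PR's JDBI wiring (the class and method names are hypothetical): run all table reads inside a single transaction at REPEATABLE READ so they observe one snapshot.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.function.Consumer;

public class SnapshotRead {
  /**
   * Runs the given work inside one REPEATABLE READ transaction so that all
   * table reads see a single consistent snapshot, then restores the
   * connection's previous isolation level and auto-commit setting.
   */
  static void withSnapshot(Connection conn, Consumer<Connection> work) {
    try {
      int prevIsolation = conn.getTransactionIsolation();
      boolean prevAutoCommit = conn.getAutoCommit();
      conn.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
      conn.setAutoCommit(false);
      try {
        work.accept(conn); // e.g. dump every table here
        conn.commit();
      } finally {
        conn.setTransactionIsolation(prevIsolation);
        conn.setAutoCommit(prevAutoCommit);
      }
    } catch (SQLException e) {
      throw new IllegalStateException("Snapshot-isolated backup failed", e);
    }
  }
}
```

With JDBI, the equivalent would be to do the whole backup inside one handle's transaction at the desired isolation level instead of issuing independent autocommit reads.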



Copilot AI left a comment


Pull request overview

Adds an ops-level database backup/restore capability to OpenMetadata, wiring new backup / restore CLI commands into OpenMetadataOperations and implementing archive-based export/import via JDBI for MySQL/PostgreSQL.

Changes:

  • Introduces DatabaseBackupRestore utility to write/read a .tar.gz backup with per-table JSON plus metadata.json.
  • Adds backup and restore Picocli commands to OpenMetadataOperations.
  • Adds unit tests for extractDatabaseName JDBC URL parsing.

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 9 comments.

File Description
openmetadata-service/src/main/java/org/openmetadata/service/util/DatabaseBackupRestore.java Implements table/column discovery, backup archive creation, and restore logic.
openmetadata-service/src/main/java/org/openmetadata/service/util/OpenMetadataOperations.java Wires backup / restore commands into the ops CLI.
openmetadata-service/src/test/java/org/openmetadata/service/util/DatabaseBackupRestoreTest.java Adds unit tests for JDBC database-name extraction helper.
openmetadata-service/pom.xml Adds commons-compress dependency for tar/gzip support.


pmbrull and others added 2 commits March 19, 2026 19:11
…mary output

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds a test-migration command to OpenMetadataOperations that:
- Restores a backup, then runs pending migrations one by one
- For each migration with a test class, runs validateBefore/validateAfter
- Prints a summary table to stdout for PR validation
- Returns exit code 0 (all pass) or 1 (any fail)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
}
}

static String versionToPackage(String version) {

⚠️ Bug: versionToPackage produces collisions for different versions

The versionToPackage method simply concatenates the numeric parts of a version string without any separator or fixed-width formatting. This means different versions can map to the same package name, causing the wrong test class to be loaded (or no test to be found).

Examples of collisions:

  • "1.12.0" → "v1120" and "1.1.20" → "v1120"
  • "1.1.0" → "v110" and "1.10" → "v110"

If a future migration version happens to collide with an existing one, the test framework will silently run the wrong test class or fail to find the correct one.

Suggested fix:

static String versionToPackage(String version) {
  String base = version.contains("-") ? version.split("-")[0] : version;
  String[] parts = base.split("\\.");
  StringBuilder sb = new StringBuilder("v");
  for (int i = 0; i < parts.length; i++) {
    if (i > 0) sb.append("_");
    sb.append(Integer.parseInt(parts[i]));
  }
  return sb.toString();
}
// "1.12.0" → "v1_12_0", "1.1.20" → "v1_1_20"
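
To make the collision concrete, here is a minimal self-contained sketch. The naive() method is an assumption about the current implementation, inferred from the examples in the finding; safe() follows the suggested fix:

```java
public class VersionPackageDemo {
  // Naive scheme described in the finding: strip dots and concatenate digits.
  static String naive(String version) {
    return "v" + version.replace(".", "");
  }

  // Separator-based scheme from the suggested fix.
  static String safe(String version) {
    String base = version.contains("-") ? version.split("-")[0] : version;
    String[] parts = base.split("\\.");
    StringBuilder sb = new StringBuilder("v");
    for (int i = 0; i < parts.length; i++) {
      if (i > 0) sb.append('_');
      sb.append(Integer.parseInt(parts[i]));
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    // The naive scheme maps two different versions to the same package:
    System.out.println(naive("1.12.0").equals(naive("1.1.20"))); // true
    // The separator-based scheme keeps them distinct:
    System.out.println(safe("1.12.0").equals(safe("1.1.20"))); // false
    System.out.println(safe("1.12.0")); // v1_12_0
  }
}
```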


new MigrationTestEntry(version, "migration execution", "RUN", false, e.getMessage()));
}

if (testCase != null) {

💡 Edge Case: AFTER tests still run when migration fails

When a migration fails (lines 106-111), the migrationFailed flag is set but the code only checks testCase != null before running validateAfter (line 113). The else if (!migrationFailed) branch at line 115 only applies to the "no tests" placeholder — if a testCase exists, its validateAfter will still execute against a partially-migrated database before the break at line 119. This could produce confusing test results or exceptions from the after-validation running against an inconsistent schema.

Suggested fix:

if (testCase != null && !migrationFailed) {
  entries.addAll(runValidation(testCase::validateAfter, handle, version, "AFTER"));
} else if (!migrationFailed) {
  entries.add(new MigrationTestEntry(version, "(no tests)", "-", true, ""));
}



gitar-bot bot commented Mar 19, 2026

Code Review 🚫 Blocked 0 resolved / 5 findings

Database backup and restore utilities implementation blocked due to critical security and data integrity issues: FK checks disabled on separate handle not effective for inserts, SQL injection vulnerability via crafted archive table/column names during restore, version collision risk in versionToPackage, and test execution continuing after migration failures.

🚨 Bug: FK checks disabled on separate handle, not effective for inserts

📄 openmetadata-service/src/main/java/org/openmetadata/service/util/DatabaseBackupRestore.java:244-250 📄 openmetadata-service/src/main/java/org/openmetadata/service/util/DatabaseBackupRestore.java:303

At line 245, jdbi.useHandle(this::disableForeignKeyChecks) disables FK checks on a handle that is immediately closed. Then restoreTablesFromArchive (line 246) calls jdbi.useHandle(handle -> insertRows(...)) at line 303, which obtains a different connection from the pool. SET FOREIGN_KEY_CHECKS = 0 (MySQL) and SET session_replication_role = 'replica' (Postgres) are session-level settings — they only apply to the connection they were executed on. Since each jdbi.useHandle() may obtain a different pooled connection, the inserts will run with FK checks still enabled, causing failures when tables are restored in an order that violates foreign key constraints.

The same issue affects the enableForeignKeyChecks call in the finally block (line 249) — it may re-enable checks on yet another connection that never had them disabled.

Suggested fix
// Use a single handle for the entire restore operation:
jdbi.useHandle(handle -> {
  try {
    disableForeignKeyChecks(handle);
    restoreTablesFromArchive(handle, backupPath, tablesMetadata);
  } finally {
    enableForeignKeyChecks(handle);
  }
});
// Update restoreTablesFromArchive to accept a Handle
// and pass it to insertRows directly instead of
// calling jdbi.useHandle() at line 303.
⚠️ Security: SQL injection via crafted backup archive table/column names

📄 openmetadata-service/src/main/java/org/openmetadata/service/util/DatabaseBackupRestore.java:397-402 📄 openmetadata-service/src/main/java/org/openmetadata/service/util/DatabaseBackupRestore.java:284 📄 openmetadata-service/src/main/java/org/openmetadata/service/util/DatabaseBackupRestore.java:292

During restore, table names are extracted from tar entry paths (line 284) and metadata.json field names (line 233) — both sourced from the untrusted backup archive. These names flow into SQL statements via quoteIdentifier() (lines 397-402), which wraps identifiers in backticks or double quotes but does NOT escape those delimiter characters within the identifier itself. A crafted archive with a table name like foo` DROP TABLE users; -- could inject arbitrary SQL. quoteIdentifier is used in SELECT (line 314), TRUNCATE (line 336), and INSERT (line 350) statements. Column names from metadata.json are similarly unvalidated (line 292, used in INSERT at line 347). While the attack requires someone to provide a malicious .tar.gz file to the restore command, this is a realistic threat for a backup/restore utility — especially if backups are downloaded from remote storage.

Suggested fix
// 1. Validate identifiers with a strict allowlist pattern:
private static final Pattern SAFE_IDENTIFIER =
    Pattern.compile("^[a-zA-Z_][a-zA-Z0-9_]*$");

String quoteIdentifier(String identifier) {
  if (!SAFE_IDENTIFIER.matcher(identifier).matches()) {
    throw new IllegalArgumentException(
        "Invalid identifier: " + identifier);
  }
  // existing quoting logic...
}
// 2. Optionally cross-check restored table names against
//    discoverTables() to ensure they exist in the DB.
⚠️ Bug: versionToPackage produces collisions for different versions

📄 openmetadata-service/src/main/java/org/openmetadata/service/util/MigrationTestRunner.java:167

The versionToPackage method simply concatenates the numeric parts of a version string without any separator or fixed-width formatting. This means different versions can map to the same package name, causing the wrong test class to be loaded (or no test to be found).

Examples of collisions:

  • "1.12.0" → "v1120" and "1.1.20" → "v1120"
  • "1.1.0" → "v110" and "1.10" → "v110"

If a future migration version happens to collide with an existing one, the test framework will silently run the wrong test class or fail to find the correct one.

Suggested fix
static String versionToPackage(String version) {
  String base = version.contains("-") ? version.split("-")[0] : version;
  String[] parts = base.split("\\.");
  StringBuilder sb = new StringBuilder("v");
  for (int i = 0; i < parts.length; i++) {
    if (i > 0) sb.append("_");
    sb.append(Integer.parseInt(parts[i]));
  }
  return sb.toString();
}
// "1.12.0" → "v1_12_0", "1.1.20" → "v1_1_20"
💡 Edge Case: Backup reads tables without snapshot isolation

📄 openmetadata-service/src/main/java/org/openmetadata/service/util/DatabaseBackupRestore.java:119-127

The backup() method reads tables sequentially within a single jdbi.useHandle callback, but without setting a transaction isolation level (e.g., REPEATABLE READ or SERIALIZABLE). If the database is being written to during backup, the resulting archive may contain an inconsistent snapshot — e.g., a foreign key reference in table A pointing to a row in table B that wasn't captured because it was deleted between the two table reads.

💡 Edge Case: AFTER tests still run when migration fails

📄 openmetadata-service/src/main/java/org/openmetadata/service/util/MigrationTestRunner.java:113

When a migration fails (lines 106-111), the migrationFailed flag is set but the code only checks testCase != null before running validateAfter (line 113). The else if (!migrationFailed) branch at line 115 only applies to the "no tests" placeholder — if a testCase exists, its validateAfter will still execute against a partially-migrated database before the break at line 119. This could produce confusing test results or exceptions from the after-validation running against an inconsistent schema.

Suggested fix
if (testCase != null && !migrationFailed) {
  entries.addAll(runValidation(testCase::validateAfter, handle, version, "AFTER"));
} else if (!migrationFailed) {
  entries.add(new MigrationTestEntry(version, "(no tests)", "-", true, ""));
}



@github-actions

🟡 Playwright Results — all passed (14 flaky)

✅ 3390 passed · ❌ 0 failed · 🟡 14 flaky · ⏭️ 183 skipped

Shard Passed Failed Flaky Skipped
✅ Shard 1 455 0 0 2
✅ Shard 2 305 0 0 1
🟡 Shard 3 668 0 5 33
🟡 Shard 4 681 0 4 41
🟡 Shard 5 671 0 1 73
🟡 Shard 6 610 0 4 33
🟡 14 flaky test(s) (passed on retry)
  • Features/DataQuality/TestCaseIncidentPermissions.spec.ts › User with TEST_CASE.EDIT_ALL can see edit icon on incidents (shard 3, 1 retry)
  • Features/DataQuality/TestCaseResultPermissions.spec.ts › User with only VIEW cannot PATCH results (shard 3, 1 retry)
  • Features/ImpactAnalysis.spec.ts › Verify column level upstream connections (shard 3, 1 retry)
  • Features/Permissions/GlossaryPermissions.spec.ts › Team-based permissions work correctly (shard 3, 1 retry)
  • Features/RestoreEntityInheritedFields.spec.ts › Validate restore with Inherited domain and data products assigned (shard 3, 1 retry)
  • Flow/ObservabilityAlerts.spec.ts › Alert operations for a user with and without permissions (shard 4, 1 retry)
  • Flow/PersonaFlow.spec.ts › Set default persona for team should work properly (shard 4, 1 retry)
  • Pages/Customproperties-part2.spec.ts › entityReferenceList shows item count, scrollable list, no expand toggle (shard 4, 1 retry)
  • Pages/Domains.spec.ts › Rename domain with tags and glossary terms preserves associations (shard 4, 1 retry)
  • Pages/ExplorePageRightPanel.spec.ts › Should verify deleted user not visible in owner selection for pipeline (shard 5, 1 retry)
  • Pages/HyperlinkCustomProperty.spec.ts › should accept valid http and https URLs (shard 6, 1 retry)
  • Pages/Login.spec.ts › Refresh should work (shard 6, 1 retry)
  • Pages/ODCSImportExport.spec.ts › Multi-object ODCS contract - object selector shows all schema objects (shard 6, 1 retry)
  • Pages/Users.spec.ts › Permissions for table details page for Data Consumer (shard 6, 1 retry)

📦 Download artifacts

How to debug locally
# Download playwright-test-results-<shard> artifact and unzip
npx playwright show-trace path/to/trace.zip    # view trace


Labels

Ingestion safe to test Add this label to run secure Github workflows on PRs
