feat(api): add browse v2 index #760
Conversation
📝 Walkthrough

Added a Browse V2 implementation: new DB schema (BrowseDirs/Entries/DirCounts), SQL/path resolution and a populate pipeline, API instrumentation and system-root deduplication, tests for dedupe and browse v2 behavior, and a hashed, cached service-binary mechanism in the daemon.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant API as API (media_browse)
    participant Browse as Browse Logic
    participant DB as Database
    participant Pop as Populate Job
    Client->>API: Browse request (path, scheme, cursor)
    API->>Browse: build system-root candidates + dedupe
    Browse->>DB: sqlBrowseV2Ready? -> check DBConfig + BrowseDirs
    alt V2 ready
        Browse->>DB: resolve dir -> read BrowseDirCounts / BrowseEntries
        DB-->>Browse: dirID, counts, entries
        Browse-->>API: BrowseResults (v2)
    else V2 not ready
        Browse->>DB: legacy fallback queries (route counts / media)
        DB-->>Browse: legacy results
        Browse-->>API: BrowseResults (fallback)
    end
    Note over API,DB: Instrumentation logs (logBrowseTiming) emitted
    Pop->>DB: (async) sqlPopulateBrowseCache -> build BrowseDirs/Entries/DirCounts
    DB-->>Pop: commit -> set DBConfig BrowseIndexVersion="2"
```
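The alt branch in the diagram hinges on a cheap readiness probe before any v2 query runs. A minimal sketch of that decision, with a hypothetical function name (the real check lives in `sqlBrowseV2Ready`; only the `BrowseIndexVersion="2"` value comes from the diagram):

```go
package main

import "fmt"

// browseV2Ready sketches the readiness probe: the v2 index is usable only
// when DBConfig records BrowseIndexVersion="2" AND BrowseDirs actually has
// rows. Checking both guards against a half-populated cache (for example,
// after a crash between table writes and the version stamp).
func browseV2Ready(indexVersion string, browseDirsHasRows bool) bool {
	return indexVersion == "2" && browseDirsHasRows
}

func main() {
	// v2 path: populate job committed and stamped the version.
	fmt.Println(browseV2Ready("2", true)) // true
	// fallback path: version stamped but table empty, or version missing.
	fmt.Println(browseV2Ready("2", false)) // false
	fmt.Println(browseV2Ready("", true))   // false
}
```

Either failure mode routes the request down the legacy fallback branch rather than returning an error to the client.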
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Codecov Report: ❌ Patch coverage is …
Actionable comments posted: 3
🧹 Nitpick comments (1)
pkg/service/daemon/daemon.go (1)
261-454: 🏗️ Heavy lift: Inject the filesystem into this new cache/manifest path.

This adds a large new filesystem surface in testable code, but it is still hard-wired to `os.*`. Threading an `afero.Fs` through these helpers would make error-path coverage and future changes much easier, and it matches the repo standard for testable filesystem code.

As per coding guidelines: "Use afero for filesystem operations in testable code".
Also applies to: 575-625
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/service/daemon/daemon.go` around lines 261 - 454, The code currently uses the global os package for all filesystem operations in prepareBinary, copyServiceBinary, readServiceBinaryManifest, writeServiceBinaryManifest (and related helpers like serviceCachePath/serviceManifestName usage), which prevents unit-testing error paths; change the Service struct to carry an afero.Fs field (e.g., fs afero.Fs) and update prepareBinary, copyServiceBinary, readServiceBinaryManifest, writeServiceBinaryManifest (and any helpers they call such as serviceCachePath, serviceCachePathValid, and DataDir usage) to use fs methods (Open, Stat, CreateTemp -> afero.TempFile or afero.WriteFile/Remove/Rename equivalents) instead of os/io/os.File calls; ensure temp file creation/permission setting and renames use afero-compatible APIs and propagate fs through helper functions and tests so all filesystem interactions are mockable.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/database/mediadb/migrations/20260430054834_browse_v2.sql`:
- Around line 63-99: The Down migration is missing recreation of legacy indexes
dropped in Up; restore the original DDL for the removed indexes
(idx_browsedircounts_child_system, idx_browsedircounts_parent_system,
idx_browseentries_parent_system_file, idx_browseentries_parent_system_name,
idx_browsedirs_parent_name) in the Down section along with any tables
(BrowseDirCounts, BrowseEntries, BrowseDirs) you recreate, ensuring the index
definitions match their original columns and constraints (including any
UNIQUE/partial/indexed expressions) so rollback fully restores pre-v2 behavior.
In `@pkg/database/mediadb/sql_browse_cache.go`:
- Around line 150-276: The browse cache helpers assume POSIX-normalized paths
but may receive non-normalized Media.Path values, causing mismatched keys;
normalize paths using filepath.Clean followed by filepath.ToSlash (and preserve
scheme prefixes like "scheme://") before any processing. Update
browseV2Builder.addMedia (and/or countPairsForPath) to canonicalize the incoming
mediaPath, and ensure browseV2AncestorDirs and browseV2ParentAndFileName operate
on the normalized path; keep special-case handling for "://"-style URIs but run
normalization on the path portion to prevent mismatched directory keys.
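The canonicalization this comment asks for can be sketched as a small helper (the function name is hypothetical; in the repo it would be applied inside `browseV2Builder.addMedia` and friends):

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
	"strings"
)

// normalizeMediaPath sketches the requested normalization: clean the path
// and force forward slashes, while leaving a "scheme://" prefix intact and
// cleaning only the path portion after it, so directory keys always match.
func normalizeMediaPath(p string) string {
	if i := strings.Index(p, "://"); i >= 0 {
		scheme, rest := p[:i+3], p[i+3:]
		// URI-style paths are always slash-separated, so use path.Clean.
		return scheme + path.Clean(filepath.ToSlash(rest))
	}
	return filepath.ToSlash(filepath.Clean(p))
}

func main() {
	fmt.Println(normalizeMediaPath("/roms//snes/../snes/game.sfc")) // /roms/snes/game.sfc
	fmt.Println(normalizeMediaPath("steam://123//Some Game"))       // steam://123/Some Game
}
```

Running every incoming `Media.Path` through one such helper before ancestor-dir derivation keeps `browseV2AncestorDirs` and `browseV2ParentAndFileName` operating on a single canonical form.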
In `@pkg/service/daemon/daemon.go`:
- Around line 506-509: serviceCachePath currently preserves any binPath
extension which breaks isServiceCacheFilename and downstream logic
(prepareBinary, Running, Stop, cache cleanup); change serviceCachePath to only
keep the .sh extension and strip all other extensions (i.e., return
"zaparoo.<shortHash>" for non-.sh binaries, and "zaparoo.<shortHash>.sh" when
binPath had .sh), and verify/update isServiceCacheFilename (and any callers in
prepareBinary, Running, Stop, cleanup) continue to accept the same two forms (no
extension or .sh) so naming and validation are consistent.
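The naming rule described above can be sketched as follows (the `zaparoo.<shortHash>` pattern comes from the comment; the function bodies are illustrative, not the repo's code):

```go
package main

import (
	"fmt"
	"strings"
)

// serviceCacheName keeps only a .sh suffix from the source binary's name and
// strips every other extension, so cached files are always either
// "zaparoo.<shortHash>" or "zaparoo.<shortHash>.sh".
func serviceCacheName(binPath, shortHash string) string {
	if strings.HasSuffix(binPath, ".sh") {
		return "zaparoo." + shortHash + ".sh"
	}
	return "zaparoo." + shortHash
}

// isServiceCacheFilename accepts exactly the two forms produced above:
// "zaparoo.<hash>" or "zaparoo.<hash>.sh", nothing else.
func isServiceCacheFilename(name string) bool {
	if !strings.HasPrefix(name, "zaparoo.") {
		return false
	}
	rest := strings.TrimPrefix(name, "zaparoo.")
	rest = strings.TrimSuffix(rest, ".sh")
	return rest != "" && !strings.Contains(rest, ".")
}

func main() {
	fmt.Println(serviceCacheName("/opt/zaparoo.exe", "a1b2c3")) // zaparoo.a1b2c3
	fmt.Println(serviceCacheName("/opt/zaparoo.sh", "a1b2c3"))  // zaparoo.a1b2c3.sh
	fmt.Println(isServiceCacheFilename("zaparoo.a1b2c3.exe"))   // false
}
```

With this shape, a `.exe` source never leaks its extension into the cache name, so the validation in `isServiceCacheFilename` and the cleanup paths stay consistent.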
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1a78af1d-6892-4562-b43e-0405cfcf2880
📒 Files selected for processing (17)
- pkg/api/methods/media_browse.go
- pkg/api/methods/media_browse_test.go
- pkg/database/mediadb/letterfilter.go
- pkg/database/mediadb/letterfilter_test.go
- pkg/database/mediadb/mediadb.go
- pkg/database/mediadb/mediadb_integration_test.go
- pkg/database/mediadb/migrations/20260430054834_browse_v2.sql
- pkg/database/mediadb/optimization_test.go
- pkg/database/mediadb/sql_browse.go
- pkg/database/mediadb/sql_browse_cache.go
- pkg/database/mediadb/sql_browse_test.go
- pkg/database/mediadb/sql_config.go
- pkg/database/mediadb/sql_media.go
- pkg/database/mediascanner/mediascanner.go
- pkg/database/mediascanner/mediascanner_test.go
- pkg/service/daemon/daemon.go
- pkg/service/daemon/daemon_test.go
💤 Files with no reviewable changes (1)
- pkg/database/mediascanner/mediascanner.go
```sql
DROP INDEX IF EXISTS idx_browsedircounts_child_system;
DROP INDEX IF EXISTS idx_browsedircounts_parent_system;
DROP INDEX IF EXISTS idx_browseentries_parent_system_file;
DROP INDEX IF EXISTS idx_browseentries_parent_system_name;
DROP INDEX IF EXISTS idx_browsedirs_parent_name;

DROP TABLE IF EXISTS BrowseDirCounts;
DROP TABLE IF EXISTS BrowseEntries;
DROP TABLE IF EXISTS BrowseDirs;

CREATE TABLE IF NOT EXISTS BrowseCache (
    DirPath text primary key,
    ParentPath text not null,
    Name text not null,
    FileCount integer not null,
    IsVirtual bool default false
);

CREATE INDEX IF NOT EXISTS idx_browsecache_parent ON BrowseCache(ParentPath);
CREATE INDEX IF NOT EXISTS idx_browsecache_virtual ON BrowseCache(IsVirtual);

CREATE TABLE IF NOT EXISTS BrowseSystemCache (
    SystemDBID integer not null,
    DirPath text not null,
    ParentPath text not null,
    Name text not null,
    FileCount integer not null,
    IsVirtual bool default false,
    primary key (SystemDBID, DirPath),
    foreign key (SystemDBID) references Systems (DBID) on delete cascade
);

CREATE INDEX IF NOT EXISTS idx_browsesystemcache_parent ON BrowseSystemCache(SystemDBID, ParentPath);
CREATE INDEX IF NOT EXISTS idx_browsesystemcache_dir ON BrowseSystemCache(DirPath);
CREATE INDEX IF NOT EXISTS idx_browsesystemcache_virtual ON BrowseSystemCache(SystemDBID, IsVirtual);

DELETE FROM DBConfig WHERE Name = 'BrowseIndexVersion';
```
Down migration does not fully restore the pre-v2 index set.
Up drops several legacy indexes, but Down only recreates a subset. That makes rollback non-symmetric and can materially degrade downgraded browse/query performance. Please restore all indexes dropped in Up (using their original DDL) in the Down section.
Actionable comments posted: 1
🧹 Nitpick comments (1)
pkg/database/mediadb/sql_browse_test.go (1)
92-111: ⚡ Quick win: Add the no-virtual-root case here.

This only covers “the virtual root exists but has no children.” The v2 builder creates `BrowseDirs.Path = ""` only when at least one `://` entry is indexed, so a ready index with only filesystem media will not have that row at all. Please add the `sql.ErrNoRows` lookup case and assert that `sqlBrowseVirtualSchemes` still returns an empty slice instead of bubbling an error.

Example test case
```diff
 func TestSqlBrowseVirtualSchemesV2_ReturnsEmptyWithoutMediaFallback(t *testing.T) {
 	t.Parallel()
 	db, mock, err := sqlmock.New()
 	require.NoError(t, err)
 	defer func() { _ = db.Close() }()
@@
 	assert.Empty(t, results)
 	require.NoError(t, mock.ExpectationsWereMet())
 }
+
+func TestSqlBrowseVirtualSchemesV2_ReturnsEmptyWhenVirtualRootIsMissing(t *testing.T) {
+	t.Parallel()
+	db, mock, err := sqlmock.New()
+	require.NoError(t, err)
+	defer func() { _ = db.Close() }()
+
+	expectBrowseV2Ready(mock)
+	mock.ExpectQuery("SELECT DBID FROM BrowseDirs WHERE Path = ").
+		WithArgs("").
+		WillReturnError(sql.ErrNoRows)
+
+	results, err := sqlBrowseVirtualSchemes(context.Background(), db, database.BrowseVirtualSchemesOptions{
+		Systems: []systemdefs.System{{ID: "SNES"}},
+	})
+	require.NoError(t, err)
+	assert.Empty(t, results)
+	require.NoError(t, mock.ExpectationsWereMet())
+}
```

As per coding guidelines: "Write tests for all new code — follow TESTING.md and pkg/testing/README.md patterns".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/database/mediadb/sql_browse_test.go` around lines 92 - 111, The test TestSqlBrowseVirtualSchemesV2_ReturnsEmptyWithoutMediaFallback currently only covers the case where BrowseDirs.Path = "" exists but has no children; add handling for the missing virtual-root row by mocking the DBID lookup to return sql.ErrNoRows (use mock.ExpectQuery for the "SELECT DBID FROM BrowseDirs WHERE Path =" query and .WillReturnError(sql.ErrNoRows)), then call sqlBrowseVirtualSchemes and assert it returns an empty slice with no error (require.NoError + assert.Empty), keeping existing expectBrowseV2Ready and final mock.ExpectationsWereMet() checks; this ensures sqlBrowseVirtualSchemes handles the no-row case instead of bubbling an error.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/service/daemon/daemon.go`:
- Around line 293-301: Cached service binaries are reused based on metadata/hash
but their filesystem mode may lack the execute bit; update the reuse paths in
the daemon where you check cachedInfo (the block using
manifest.matchesServiceMetadata and the similar block later) to normalize
permissions before returning by calling os.Chmod(manifest.ServicePath, 0o700)
(or use a helper used by prepareBinary) so the file is guaranteed executable;
ensure any error from Chmod is handled/returned so Start()/restart won't get a
permission denied at runtime.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b8457163-0a35-46b6-99b7-1d1d554fbc84
📒 Files selected for processing (4)
- pkg/database/mediadb/sql_browse_cache.go
- pkg/database/mediadb/sql_browse_test.go
- pkg/service/daemon/daemon.go
- pkg/service/daemon/daemon_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
- pkg/service/daemon/daemon_test.go
🧹 Nitpick comments (3)
pkg/database/mediadb/sql_browse_test.go (2)
50-60: ⚡ Quick win: Use `filepath.Join` for non-normalization test paths instead of hardcoded POSIX literals.

In non-normalization tests, path literals like `"/roms/"` and `"/media/fat/games/"` violate the repository path-construction rule. Build those values via `filepath.Join` (and add trailing separators where needed). Keep hardcoded mixed/URI strings only in normalization-specific tests.

Suggested pattern

```diff
+import "path/filepath"

-	WithArgs("/roms/", "/roms/")
+	romsRoot := filepath.Join(string(filepath.Separator), "roms") + string(filepath.Separator)
+	WithArgs(romsRoot, romsRoot)

-	Routes: []string{"/media/fat/games/SNES"},
+	snesRoute := filepath.Join(string(filepath.Separator), "media", "fat", "games", "SNES")
+	Routes: []string{snesRoute},
```

As per coding guidelines: "Use filepath.Join for path construction everywhere, including test files — never hardcode POSIX-style paths like "/roms/snes/game.sfc" as string literals."

Also applies to: 80-85, 141-149, 218-227
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/database/mediadb/sql_browse_test.go` around lines 50 - 60, The test uses hardcoded POSIX path literals (e.g. "/media/fat/games/" passed into BrowseDirectoriesOptions.PathPrefix in the sql_browse_test.go test that calls sqlBrowseDirectories) which violates the repo rule; replace those literals with filepath.Join calls to construct the path (and append a trailing separator when the test expects one) so the test is platform-neutral. Update all similar occurrences listed in the comment (around lines 80-85, 141-149, 218-227) to use filepath.Join instead of hardcoded strings, keeping any normalization-specific tests using explicit mixed/URI strings only.
35-39: Anchor `sqlmock.ExpectQuery` patterns to prevent overmatch.

All 14 ExpectQuery patterns in this file use unanchored regexes (missing `^` and `$`). This allows them to match unintended SQL and reduces test fidelity. Use anchored patterns with `regexp.QuoteMeta(...)` to match only exact queries.

Affected lines: 35, 38, 49, 52, 76, 79, 99, 102, 121, 140, 143, 217, 220, 223.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/database/mediadb/sql_browse_test.go` around lines 35 - 39, The sqlmock.ExpectQuery calls (e.g., the ones expecting "SELECT Value FROM DBConfig WHERE Name = " and "SELECT 1 FROM BrowseDirs LIMIT 1") use unanchored regexes and should be made exact: replace the raw string patterns in the sqlmock.ExpectQuery invocations with anchored, quoted regexes using regexp.QuoteMeta and surrounding ^...$ so they only match the intended SQL (apply to all ExpectQuery uses in this file, including those referencing DBConfig BrowseIndexVersion and BrowseDirs); keep the same WithArgs/WillReturnRows calls unchanged.

pkg/service/daemon/daemon.go (1)
877-895: 💤 Low value: Consider returning a copy to avoid in-place mutation.

`serviceExecEnv` mutates the input slice when updating an existing entry. While current callers pass `os.Environ()` (which returns a new slice each time), this pattern could be a footgun for future callers expecting immutable behavior.

🔧 Optional: defensive copy before mutation

```diff
 func serviceExecEnv(env []string, appPath string) []string {
 	appEnv := fmt.Sprintf("%s=%s", config.AppEnv, appPath)
+	result := make([]string, len(env))
+	copy(result, env)
-	for i, value := range env {
+	for i, value := range result {
 		if strings.HasPrefix(value, config.AppEnv+"=") {
-			env[i] = appEnv
-			return env
+			result[i] = appEnv
+			return result
 		}
 	}
-	return append(env, appEnv)
+	return append(result, appEnv)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/service/daemon/daemon.go` around lines 877 - 895, serviceExecEnv currently mutates the incoming env slice when replacing an existing APP_ENV entry which can be surprising to callers; change it to operate on and return a copy instead: when you find a matching entry in serviceExecEnv, create a new slice copy (e.g., copy the original into a new []string) and replace the element in that copy before returning, and when appending the new appEnv use a copy+append so the original env remains unchanged; keep the same behavior and signature of serviceExecEnv and leave serviceExecArgs unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4c5a8669-cde2-434d-87cb-af7d92b41cf1
📒 Files selected for processing (3)
- pkg/database/mediadb/sql_browse_test.go
- pkg/service/daemon/daemon.go
- pkg/service/daemon/daemon_test.go