feat(api): add system-aware media browse #757
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings. Use the following commands and checkboxes to manage reviews:
📝 Walkthrough
Adds system-scoped media browsing: the API accepts a systems filter for media browse requests, with per-route counts and directory listings scoped to the resolved systems.
Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Handler as API Handler
    participant DBI as Database Interface
    participant SQL as SQL Layer
    participant SysCache as BrowseSystemCache
    participant Media as Media Table
    Client->>Handler: GET /media/browse?systems=["sys1"] (optional path)
    Handler->>Handler: Resolve systems (fuzzy?/lookup)
    alt No path (root discovery)
        Handler->>DBI: BrowseRouteCounts(opts{Systems})
        DBI->>SQL: sqlBrowseRouteCounts(opts)
        SQL->>SysCache: Query per-route cached counts (for Systems)
        alt Cache covers route
            SysCache-->>SQL: return route counts + SystemIDs
        else Partial/miss
            SQL->>Media: Query Media filtered by Systems for missing routes
            Media-->>SQL: return counts + SystemIDs
        end
        SQL-->>DBI: map[route]BrowseRouteCount (FileCount, SystemIDs)
        DBI-->>Handler: route results
        Handler-->>Client: root entries with systemIds/systemId
    else Path provided (directory browse)
        Handler->>DBI: BrowseDirectories(opts{Path,Systems})
        DBI->>SQL: sqlBrowseDirectories(opts)
        SQL->>SysCache: try per-system directory cache
        alt Cache hit or merged
            SysCache-->>SQL: directory entries with SystemIDs
        else Fallback
            SQL->>Media: Query Media WHERE PathPrefix AND Systems
            Media-->>SQL: directory/file counts and SystemIDs
        end
        SQL-->>DBI: []BrowseDirectoryResult (with SystemIDs)
        DBI-->>Handler: directory listing
        Handler-->>Client: filtered directory listing
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
Poem
🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
📝 Generate docstrings
🧪 Generate unit tests (beta)
Review rate limit: 4/5 reviews remaining; refill in 12 minutes.
Codecov Report: ❌ Patch coverage is … 📢 Thoughts on this report? Let us know!
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pkg/database/mediadb/sql_browse_cache.go (1)
95-138: ⚠️ Potential issue | 🟠 Major

Invalidate `BrowseSystemCache` on writes, or system-scoped browse will serve stale rows.

The new cache is only rebuilt here, but the normal mutation paths in `pkg/database/mediadb/mediadb.go` never clear it. Since the system-scoped browse SQL only falls back to direct `Media` reads when the cache returns no rows, a selective reindex/truncate can leave old per-system entries in place, and those stale counts/directories will keep winning until background optimization happens.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/database/mediadb/sql_browse_cache.go` around lines 95 - 138, The browse system cache must be invalidated on any Media writes; update all mutation paths in pkg/database/mediadb/mediadb.go that modify the Media table (e.g., functions like UpsertMedia, InsertMedia, UpdateMedia, DeleteMedia, BulkInsertMedia or any methods that run INSERT/UPDATE/DELETE against Media) to execute a cache invalidation SQL (either tx.ExecContext(ctx, "DELETE FROM BrowseSystemCache") for full invalidation or tx.ExecContext(ctx, "DELETE FROM BrowseSystemCache WHERE SystemDBID = ?", systemDBID) when the change is scoped to a single system) within the same transaction (or immediately after a successful commit) so system-scoped browse queries cannot return stale rows. Ensure you reuse the transaction variable (tx) and context (ctx) used by those mutation functions and handle/propagate any ExecContext errors consistently with existing error handling.
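The wiring that prompt describes can be sketched with a small helper. This is illustrative, not the PR's code: only the `BrowseSystemCache` and `SystemDBID` names come from the comment above, and the real change would execute the returned statement via `tx.ExecContext` inside each mutation's transaction.

```go
package main

import (
	"fmt"
	"strings"
)

// buildBrowseCacheInvalidation is a hypothetical helper showing the two
// statements the prompt suggests: a scoped DELETE when the affected systems
// are known, and a full table wipe otherwise.
func buildBrowseCacheInvalidation(systemDBIDs []int64) (string, []any) {
	if len(systemDBIDs) == 0 {
		// No scoping information: drop every cached row.
		return "DELETE FROM BrowseSystemCache", nil
	}
	// Scoped invalidation: one placeholder per affected system.
	placeholders := strings.TrimSuffix(strings.Repeat("?,", len(systemDBIDs)), ",")
	args := make([]any, len(systemDBIDs))
	for i, id := range systemDBIDs {
		args[i] = id
	}
	return "DELETE FROM BrowseSystemCache WHERE SystemDBID IN (" + placeholders + ")", args
}

func main() {
	query, args := buildBrowseCacheInvalidation([]int64{3, 7})
	fmt.Println(query, args)
}
```

Running the result inside the same transaction as the `Media` write (or immediately after a successful commit) keeps the cache from serving stale rows, as the prompt asks.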
🧹 Nitpick comments (2)
pkg/api/methods/media_browse_test.go (1)
152-195: Use `filepath.Join` for the new test paths.

These new cases add more hardcoded POSIX paths (`"/roms/..."`). Please build them with `filepath.Join` and normalize only where the assertion needs slash-separated output.

As per coding guidelines, "Use `filepath.Join` for path construction everywhere, including test files — never hardcode POSIX-style paths like "/roms/snes/game.sfc" as string literals".

Also applies to: 290-336
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/api/methods/media_browse_test.go` around lines 152 - 195, In TestHandleMediaBrowse_SystemRootRoutes replace hardcoded POSIX paths like "/roms/SNES" and "/roms/shared" with platform-independent construction using filepath.Join (e.g., filepath.Join("/roms", "SNES")) wherever those strings are created (mockPlatform.RootDirs/Launchers returns and mockMediaDB.On BrowseRouteCounts expectations and return map keys); when your assertion expects slash-separated output, normalize only at assertion time (convert the joined path to the expected form) instead of hardcoding POSIX literals. Ensure the same change is applied to the similar cases referenced at lines 290-336.

pkg/database/mediadb/sql_browse_test.go (1)
118-241: Please stop hardcoding POSIX paths in the new SQL tests.

The added expectations and inputs should be assembled with `filepath.Join` instead of string literals like `"/roms/shared/"`.

As per coding guidelines, "Use `filepath.Join` for path construction everywhere, including test files — never hardcode POSIX-style paths like "/roms/snes/game.sfc" as string literals".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/database/mediadb/sql_browse_test.go` around lines 118 - 241, The tests are hardcoding POSIX paths (e.g. "/roms/shared/", "/roms/SNES/") — update each test (TestSqlBrowseDirectories_WithSystemsUsesSystemCache, TestSqlBrowseDirectories_WithSystemsReadsFromMediaWhenCacheEmpty, TestSqlBrowseRouteCounts_ReturnsOnlyPopulatedRoutes, TestSqlBrowseRouteCounts_ReadsFromMediaWhenSystemCacheEmpty) to construct paths via filepath.Join (and use that joined value wherever used: the sqlmock.WithArgs calls, the BrowseDirectoriesOptions.PathPrefix, BrowseRouteCountsOptions.Routes entries, and any expected argument values), and add the necessary import for "path/filepath" at the top of the test file so all path literals are replaced by filepath.Join(...) calls.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/api/methods/media_browse.go`:
- Around line 272-282: buildSystemBrowseRouteCandidates currently adds absolute
launcher folders and joined relative folders without ensuring they live inside
allowed rootDirs, causing browseFilesystem to later reject them; update the
logic in buildSystemBrowseRouteCandidates to skip any candidate path that is not
under at least one entry in rootDirs before calling addRoute. For absolute
folders (launcher.Folders when filepath.IsAbs) and for each cleaned joined path
(filepath.Join(root, folder)), compute membership against each rootDir using
filepath.Rel (or equivalent) and only add the route if the relative path does
not start with ".." and no error occurred; reference
buildSystemBrowseRouteCandidates, launcher.Folders, rootDirs, and addRoute when
making the change.
In `@pkg/database/mediadb/mediadb.go`:
- Around line 626-628: The UpdateLastGenerated function currently passes the
full stored system list into invalidateCaches, bypassing the cap you added in
the CommitTransaction path; modify UpdateLastGenerated to compute an
invalidationScope using the same cap logic (maxSelectiveInvalidationSystems)
before calling invalidateCaches (same behavior as the block you added that
returns invalidationScope{AllSystems:true} when len(systemIDs) >
maxSelectiveInvalidationSystems), so that UpdateLastGenerated and NewNamesIndex
both use the capped selective invalidation path rather than triggering a huge
selective invalidation.
In `@pkg/database/mediadb/sql_browse.go`:
- Around line 94-103: sqlBrowseDirectoriesForSystemsFromCache currently treats
any non-empty cache result as a full hit and returns it, which causes partial
cache coverage (e.g., SNES present but Genesis missing) to drop systems and
routes; change the flow so that sqlBrowseDirectoriesForSystemsFromCache is used
to read cached rows but, when its returned set does not cover all requested
system IDs, call sqlBrowseDirectoriesForSystemsFromMedia to fetch missing
systems and then merge the results (union routes/directories and union
SystemIDs) before returning; update callers (the function wrapping this return
and the similar logic around lines 416-437) to perform the same merge behavior
rather than returning early on non-empty cache results, ensuring
BrowseSystemCache entries are supplemented, not treated as authoritative unless
they cover all requested systems.
---
Outside diff comments:
In `@pkg/database/mediadb/sql_browse_cache.go`:
- Around line 95-138: The browse system cache must be invalidated on any Media
writes; update all mutation paths in pkg/database/mediadb/mediadb.go that modify
the Media table (e.g., functions like UpsertMedia, InsertMedia, UpdateMedia,
DeleteMedia, BulkInsertMedia or any methods that run INSERT/UPDATE/DELETE
against Media) to execute a cache invalidation SQL (either tx.ExecContext(ctx,
"DELETE FROM BrowseSystemCache") for full invalidation or tx.ExecContext(ctx,
"DELETE FROM BrowseSystemCache WHERE SystemDBID = ?", systemDBID) when the
change is scoped to a single system) within the same transaction (or immediately
after a successful commit) so system-scoped browse queries cannot return stale
rows. Ensure you reuse the transaction variable (tx) and context (ctx) used by
those mutation functions and handle/propagate any ExecContext errors
consistently with existing error handling.
---
Nitpick comments:
In `@pkg/api/methods/media_browse_test.go`:
- Around line 152-195: In TestHandleMediaBrowse_SystemRootRoutes replace
hardcoded POSIX paths like "/roms/SNES" and "/roms/shared" with
platform-independent construction using filepath.Join (e.g.,
filepath.Join("/roms", "SNES")) wherever those strings are created
(mockPlatform.RootDirs/Launchers returns and mockMediaDB.On BrowseRouteCounts
expectations and return map keys); when your assertion expects slash-separated
output, normalize only at assertion time (convert the joined path to the
expected form) instead of hardcoding POSIX literals. Ensure the same change is
applied to the similar cases referenced at lines 290-336.
In `@pkg/database/mediadb/sql_browse_test.go`:
- Around line 118-241: The tests are hardcoding POSIX paths (e.g.
"/roms/shared/", "/roms/SNES/") — update each test
(TestSqlBrowseDirectories_WithSystemsUsesSystemCache,
TestSqlBrowseDirectories_WithSystemsReadsFromMediaWhenCacheEmpty,
TestSqlBrowseRouteCounts_ReturnsOnlyPopulatedRoutes,
TestSqlBrowseRouteCounts_ReadsFromMediaWhenSystemCacheEmpty) to construct paths
via filepath.Join (and use that joined value wherever used: the sqlmock.WithArgs
calls, the BrowseDirectoriesOptions.PathPrefix, BrowseRouteCountsOptions.Routes
entries, and any expected argument values), and add the necessary import for
"path/filepath" at the top of the test file so all path literals are replaced by
filepath.Join(...) calls.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 62246568-5a59-478f-8595-9adabea908ff
📒 Files selected for processing (16)
- docs/api/methods.md
- pkg/api/methods/media_browse.go
- pkg/api/methods/media_browse_test.go
- pkg/api/models/params.go
- pkg/api/models/responses.go
- pkg/database/database.go
- pkg/database/mediadb/mediadb.go
- pkg/database/mediadb/mediadb_integration_test.go
- pkg/database/mediadb/migrations/20260429142159_system_browse_cache.sql
- pkg/database/mediadb/optimization_test.go
- pkg/database/mediadb/slug_cache.go
- pkg/database/mediadb/sql_browse.go
- pkg/database/mediadb/sql_browse_cache.go
- pkg/database/mediadb/sql_browse_test.go
- pkg/database/mediascanner/mediascanner.go
- pkg/testing/helpers/db_mocks.go
```go
if launcher.SkipFilesystemScan {
	continue
}
for _, folder := range launcher.Folders {
	if filepath.IsAbs(folder) {
		addRoute(filepath.ToSlash(filepath.Clean(folder)))
		continue
	}
	for _, root := range rootDirs {
		addRoute(filepath.ToSlash(filepath.Clean(filepath.Join(root, folder))))
	}
```
Don't return filesystem routes that the next browse call will reject.
buildSystemBrowseRouteCandidates currently accepts absolute launcher folders and ../-escaping relative folders after filepath.Join, but browseFilesystem later blocks anything outside RootDirs. That lets media.browse return root entries that immediately fail with “not within an allowed root directory” when the client follows them.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/api/methods/media_browse.go` around lines 272 - 282,
buildSystemBrowseRouteCandidates currently adds absolute launcher folders and
joined relative folders without ensuring they live inside allowed rootDirs,
causing browseFilesystem to later reject them; update the logic in
buildSystemBrowseRouteCandidates to skip any candidate path that is not under at
least one entry in rootDirs before calling addRoute. For absolute folders
(launcher.Folders when filepath.IsAbs) and for each cleaned joined path
(filepath.Join(root, folder)), compute membership against each rootDir using
filepath.Rel (or equivalent) and only add the route if the relative path does
not start with ".." and no error occurred; reference
buildSystemBrowseRouteCandidates, launcher.Folders, rootDirs, and addRoute when
making the change.
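The containment rule that prompt describes can be sketched as follows. `isUnderRoot` is a hypothetical name; only `rootDirs` and the `filepath.Rel` approach come from the comment above:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// isUnderRoot accepts a candidate only if filepath.Rel from some root
// succeeds and the result does not climb out of the root via "..".
func isUnderRoot(rootDirs []string, candidate string) bool {
	cleaned := filepath.Clean(candidate)
	for _, root := range rootDirs {
		rel, err := filepath.Rel(root, cleaned)
		if err != nil {
			continue
		}
		// Reject "..", "../x", etc.; accept "." (the root itself) and below.
		if rel != ".." && !strings.HasPrefix(rel, ".."+string(filepath.Separator)) {
			return true
		}
	}
	return false
}

func main() {
	roots := []string{filepath.Join(string(filepath.Separator), "roms")}
	fmt.Println(isUnderRoot(roots, filepath.Join(string(filepath.Separator), "roms", "SNES")))
	fmt.Println(isUnderRoot(roots, filepath.Join(string(filepath.Separator), "etc")))
}
```

Applying a check of this shape before `addRoute` would keep `buildSystemBrowseRouteCandidates` from emitting routes that `browseFilesystem` later rejects.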
```go
if len(systemIDs) > maxSelectiveInvalidationSystems {
	return invalidationScope{AllSystems: true}
}
```
Apply the invalidation cap in UpdateLastGenerated() too.
This helper only protects the CommitTransaction() path. NewNamesIndex() still calls db.UpdateLastGenerated() before clearing IndexingSystems, and UpdateLastGenerated() still feeds the full stored system list straight into invalidateCaches(). On a full-library run, that means you still execute the huge selective invalidation you were trying to avoid, so the new cap never helps at the end of indexing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/database/mediadb/mediadb.go` around lines 626 - 628, The
UpdateLastGenerated function currently passes the full stored system list into
invalidateCaches, bypassing the cap you added in the CommitTransaction path;
modify UpdateLastGenerated to compute an invalidationScope using the same cap
logic (maxSelectiveInvalidationSystems) before calling invalidateCaches (same
behavior as the block you added that returns invalidationScope{AllSystems:true}
when len(systemIDs) > maxSelectiveInvalidationSystems), so that
UpdateLastGenerated and NewNamesIndex both use the capped selective invalidation
path rather than triggering a huge selective invalidation.
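As a sketch of the shared cap the prompt asks for: the `invalidationScope` and `maxSelectiveInvalidationSystems` names echo the comment above, but the concrete limit value and the helper name are assumptions for illustration.

```go
package main

import "fmt"

const maxSelectiveInvalidationSystems = 50 // illustrative limit, not the PR's value

type invalidationScope struct {
	AllSystems bool
	SystemIDs  []string
}

// scopeForSystems applies the same cap on every path that feeds
// invalidateCaches, so UpdateLastGenerated and CommitTransaction
// both fall back to full invalidation for oversized system lists.
func scopeForSystems(systemIDs []string) invalidationScope {
	if len(systemIDs) > maxSelectiveInvalidationSystems {
		return invalidationScope{AllSystems: true}
	}
	return invalidationScope{SystemIDs: systemIDs}
}

func main() {
	fmt.Println(scopeForSystems([]string{"SNES", "Genesis"}).AllSystems)
	fmt.Println(scopeForSystems(make([]string, 500)).AllSystems)
}
```

Routing every caller through one helper like this is what keeps the two invalidation paths from diverging again.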
Actionable comments posted: 3
♻️ Duplicate comments (1)
pkg/database/mediadb/sql_browse.go (1)
422-455: ⚠️ Potential issue | 🟠 Major

Add the same missing-system fallback here.

`sqlBrowseVirtualSchemesForSystems` still treats `BrowseSystemCache` as authoritative. After an upgrade, if only some requested systems have populated system-cache rows, this will return an incomplete scheme list and incomplete `SystemIDs` for the rest. It needs the same cache+media merge strategy that `sqlBrowseDirectoriesForSystems` and `sqlBrowseRouteCounts` now use.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/database/mediadb/sql_browse.go` around lines 422 - 455, sqlBrowseVirtualSchemesForSystems currently treats BrowseSystemCache as authoritative and can return incomplete results when some systems lack cache rows; update it to use the same cache+media merge fallback used by sqlBrowseDirectoriesForSystems and sqlBrowseRouteCounts. Specifically, in sqlBrowseVirtualSchemesForSystems detect missing SystemIDs from BrowseSystemCache, fetch the missing data from the media tables (same queries/joins used by the other functions), and merge cache rows with media-derived rows so the final returned scheme list and SystemIDs include entries for systems without cache rows. Ensure you reference BrowseSystemCache and the existing merge logic in sqlBrowseDirectoriesForSystems/sqlBrowseRouteCounts when implementing the fallback so behavior is consistent.
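The merge strategy reads roughly like this sketch. The function names and the `map[scheme][]systemID` shape are illustrative, not the PR's actual types:

```go
package main

import "fmt"

// uncoveredSystems reports which requested systems the cached rows missed,
// i.e. the set that still has to be fetched from the Media table.
func uncoveredSystems(requested []string, cached map[string][]string) []string {
	covered := make(map[string]bool)
	for _, systems := range cached {
		for _, s := range systems {
			covered[s] = true
		}
	}
	var missing []string
	for _, s := range requested {
		if !covered[s] {
			missing = append(missing, s)
		}
	}
	return missing
}

// mergeSchemes unions cache-derived and media-derived results so the cache
// supplements, rather than replaces, the authoritative data.
func mergeSchemes(cached, fromMedia map[string][]string) map[string][]string {
	merged := make(map[string][]string)
	for scheme, systems := range cached {
		merged[scheme] = append(merged[scheme], systems...)
	}
	for scheme, systems := range fromMedia {
		merged[scheme] = append(merged[scheme], systems...)
	}
	return merged
}

func main() {
	cached := map[string][]string{"steam": {"pc"}}
	fmt.Println(uncoveredSystems([]string{"pc", "snes"}, cached))
	fmt.Println(len(mergeSchemes(cached, map[string][]string{"steam": {"snes"}})["steam"]))
}
```

With this shape, a partially populated cache only narrows the Media query; it can no longer drop schemes or `SystemIDs` for uncached systems.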
🧹 Nitpick comments (1)
pkg/api/methods/media_browse_test.go (1)
303-353: Add a filtered virtual-browse test as well.

The new coverage verifies `systems` propagation for filesystem browsing, but `browseVirtual()` now threads the same filter into both `BrowseFiles` and `BrowseFileCount`. A regression on `steam://`-style browsing would still pass this suite.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/api/methods/media_browse_test.go` around lines 303 - 353, Test coverage is missing for virtual browsing with a systems filter: add a new unit test (modeled on TestHandleMediaBrowse_FilesystemFiltersBySystem) that targets browseVirtual() behavior by supplying a virtual path (e.g., "steam://" or other non-filesystem path) and a Systems filter, and assert that mockMediaDB receives the systems constraint for both BrowseFiles and BrowseFileCount calls; specifically create a TestHandleMediaBrowse_VirtualFiltersBySystem that sets Path to a virtual scheme, configures mockPlatform/ mockMediaDB expectations for BrowseFiles and BrowseFileCount (and BrowseDirectories if applicable), calls HandleMediaBrowse(env), and verifies the results and mockMediaDB.AssertExpectations to ensure the systems filter is propagated into browseVirtual()'s BrowseFiles and BrowseFileCount code paths.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/api/methods/media_browse_test.go`:
- Around line 108-110: browseTestAbsPath currently builds a pseudo-absolute path
(starts with string(filepath.Separator)) which on Windows yields "\roms" and
does not hit filepath.IsAbs; change it to produce a real OS-rooted absolute path
by querying the current working directory, extracting the volume/root with
filepath.VolumeName(os.Getwd()), use that volume (if non-empty) +
string(filepath.Separator) as the root, otherwise use string(filepath.Separator)
for POSIX, then call filepath.Join(root, parts...) so browseTestAbsPath returns
a true absolute path that exercises filepath.IsAbs.
In `@pkg/database/mediadb/mediadb.go`:
- Around line 208-214: invalidateBrowseCacheForMediaChange currently returns the
error from sqlInvalidateBrowseCache which causes post-commit cache-cleanup
failures to be treated as write failures; change its non-transaction path so it
calls sqlInvalidateBrowseCache(db.ctx, db.conn(), systemDBIDs, len(systemDBIDs)
== 0) but does not propagate the returned error to callers (InsertMedia,
UpdateMediaTitle, BulkSetMediaMissing, ResetMissingFlags); instead swallow the
error (optionally log it via db.logger or a debug logger) and always return nil
so cache invalidation is treated as follow-up cleanup rather than the mutation
result.
- Around line 216-225: flushBrowseCacheInvalidation currently clears the
in-memory dirty flags unconditionally which loses retry state on
sqlInvalidateBrowseCache failure; change it so you call err :=
sqlInvalidateBrowseCache(db.ctx, db.sql, systemDBIDs, db.browseCacheDirtyAll)
and only call db.clearBrowseCacheInvalidation() after err == nil, returning the
err immediately when non-nil so the dirty state is preserved for retry (refer to
function flushBrowseCacheInvalidation and method clearBrowseCacheInvalidation).
---
Duplicate comments:
In `@pkg/database/mediadb/sql_browse.go`:
- Around line 422-455: sqlBrowseVirtualSchemesForSystems currently treats
BrowseSystemCache as authoritative and can return incomplete results when some
systems lack cache rows; update it to use the same cache+media merge fallback
used by sqlBrowseDirectoriesForSystems and sqlBrowseRouteCounts. Specifically,
in sqlBrowseVirtualSchemesForSystems detect missing SystemIDs from
BrowseSystemCache, fetch the missing data from the media tables (same
queries/joins used by the other functions), and merge cache rows with
media-derived rows so the final returned scheme list and SystemIDs include
entries for systems without cache rows. Ensure you reference BrowseSystemCache
and the existing merge logic in
sqlBrowseDirectoriesForSystems/sqlBrowseRouteCounts when implementing the
fallback so behavior is consistent.
---
Nitpick comments:
In `@pkg/api/methods/media_browse_test.go`:
- Around line 303-353: Test coverage is missing for virtual browsing with a
systems filter: add a new unit test (modeled on
TestHandleMediaBrowse_FilesystemFiltersBySystem) that targets browseVirtual()
behavior by supplying a virtual path (e.g., "steam://" or other non-filesystem
path) and a Systems filter, and assert that mockMediaDB receives the systems
constraint for both BrowseFiles and BrowseFileCount calls; specifically create a
TestHandleMediaBrowse_VirtualFiltersBySystem that sets Path to a virtual scheme,
configures mockPlatform/ mockMediaDB expectations for BrowseFiles and
BrowseFileCount (and BrowseDirectories if applicable), calls
HandleMediaBrowse(env), and verifies the results and
mockMediaDB.AssertExpectations to ensure the systems filter is propagated into
browseVirtual()'s BrowseFiles and BrowseFileCount code paths.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f392b8ca-2268-4ac1-9850-85a90f40e457
📒 Files selected for processing (6)
- pkg/api/methods/media_browse.go
- pkg/api/methods/media_browse_test.go
- pkg/database/mediadb/mediadb.go
- pkg/database/mediadb/sql_browse.go
- pkg/database/mediadb/sql_browse_cache.go
- pkg/database/mediadb/sql_browse_test.go
✅ Files skipped from review due to trivial changes (1)
- pkg/database/mediadb/sql_browse_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
- pkg/database/mediadb/sql_browse_cache.go
```go
func (db *MediaDB) invalidateBrowseCacheForMediaChange(systemDBIDs ...int64) error {
	if db.inTransaction {
		db.markBrowseCacheDirty(systemDBIDs...)
		return nil
	}
	return sqlInvalidateBrowseCache(db.ctx, db.conn(), systemDBIDs, len(systemDBIDs) == 0)
}
```
Don't surface post-write cache cleanup as the mutation failure.
On this path the primary SQL mutation has already autocommitted. Propagating sqlInvalidateBrowseCache() errors back through callers like InsertMedia, UpdateMediaTitle, BulkSetMediaMissing, and ResetMissingFlags makes successful writes look failed, which invites unsafe retries and duplicate work. Cache invalidation here needs to be treated as follow-up cleanup, not as the write result.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/database/mediadb/mediadb.go` around lines 208 - 214,
invalidateBrowseCacheForMediaChange currently returns the error from
sqlInvalidateBrowseCache which causes post-commit cache-cleanup failures to be
treated as write failures; change its non-transaction path so it calls
sqlInvalidateBrowseCache(db.ctx, db.conn(), systemDBIDs, len(systemDBIDs) == 0)
but does not propagate the returned error to callers (InsertMedia,
UpdateMediaTitle, BulkSetMediaMissing, ResetMissingFlags); instead swallow the
error (optionally log it via db.logger or a debug logger) and always return nil
so cache invalidation is treated as follow-up cleanup rather than the mutation
result.
```go
func (db *MediaDB) flushBrowseCacheInvalidation() error {
	if !db.browseCacheDirtyAll && len(db.browseCacheDirtyDBIDs) == 0 {
		return nil
	}
	systemDBIDs := make([]int64, 0, len(db.browseCacheDirtyDBIDs))
	for systemDBID := range db.browseCacheDirtyDBIDs {
		systemDBIDs = append(systemDBIDs, systemDBID)
	}
	err := sqlInvalidateBrowseCache(db.ctx, db.sql, systemDBIDs, db.browseCacheDirtyAll)
	db.clearBrowseCacheInvalidation()
```
Preserve dirty browse-cache state until invalidation succeeds.
flushBrowseCacheInvalidation() always calls clearBrowseCacheInvalidation(), even when sqlInvalidateBrowseCache() fails. After a committed transaction, that drops the only retry state and can leave stale browse results behind permanently after a transient DB error.
Suggested fix

```diff
 func (db *MediaDB) flushBrowseCacheInvalidation() error {
 	if !db.browseCacheDirtyAll && len(db.browseCacheDirtyDBIDs) == 0 {
 		return nil
 	}
 	systemDBIDs := make([]int64, 0, len(db.browseCacheDirtyDBIDs))
 	for systemDBID := range db.browseCacheDirtyDBIDs {
 		systemDBIDs = append(systemDBIDs, systemDBID)
 	}
-	err := sqlInvalidateBrowseCache(db.ctx, db.sql, systemDBIDs, db.browseCacheDirtyAll)
-	db.clearBrowseCacheInvalidation()
-	return err
+	if err := sqlInvalidateBrowseCache(db.ctx, db.sql, systemDBIDs, db.browseCacheDirtyAll); err != nil {
+		return err
+	}
+	db.clearBrowseCacheInvalidation()
+	return nil
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func (db *MediaDB) flushBrowseCacheInvalidation() error {
	if !db.browseCacheDirtyAll && len(db.browseCacheDirtyDBIDs) == 0 {
		return nil
	}
	systemDBIDs := make([]int64, 0, len(db.browseCacheDirtyDBIDs))
	for systemDBID := range db.browseCacheDirtyDBIDs {
		systemDBIDs = append(systemDBIDs, systemDBID)
	}
	if err := sqlInvalidateBrowseCache(db.ctx, db.sql, systemDBIDs, db.browseCacheDirtyAll); err != nil {
		return err
	}
	db.clearBrowseCacheInvalidation()
	return nil
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/database/mediadb/mediadb.go` around lines 216 - 225,
flushBrowseCacheInvalidation currently clears the in-memory dirty flags
unconditionally which loses retry state on sqlInvalidateBrowseCache failure;
change it so you call err := sqlInvalidateBrowseCache(db.ctx, db.sql,
systemDBIDs, db.browseCacheDirtyAll) and only call
db.clearBrowseCacheInvalidation() after err == nil, returning the err
immediately when non-nil so the dirty state is preserved for retry (refer to
function flushBrowseCacheInvalidation and method clearBrowseCacheInvalidation).
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pkg/api/middleware/ratelimit_test.go (1)
254-261: ⚠️ Potential issue | 🟠 Major

Strengthen the close assertion to avoid false-positive passes.

At Line 261, `assert.Error` passes for any read error (including deadline timeout), so this test can pass even if the server never closed the socket.

✅ Suggested test hardening
```diff
 require.NoError(t, conn.WriteMessage(websocket.TextMessage, []byte("again")))
 time.Sleep(50 * time.Millisecond)
 assert.Equal(t, int32(1), handlerCalls.Load(), "second message should not reach handler")

 // The server should have closed the connection.
-_ = conn.SetReadDeadline(time.Now().Add(100 * time.Millisecond))
-_, _, err = conn.ReadMessage()
-assert.Error(t, err, "connection should be closed after rate limit exceeded")
+require.Eventually(t, func() bool {
+	return conn.WriteMessage(websocket.TextMessage, []byte("third")) != nil
+}, 500*time.Millisecond, 20*time.Millisecond, "connection should be closed after rate limit exceeded")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/api/middleware/ratelimit_test.go` around lines 254 - 261, The test currently uses assert.Error after conn.ReadMessage which can pass due to a deadline timeout; change the assertion to verify the connection was actually closed by the server by (1) calling conn.ReadMessage(), capturing err, (2) asserting that err is not a timeout (check net.Error and Timeout()), and (3) asserting that the error is a WebSocket close-type error using websocket.IsCloseError or websocket.IsUnexpectedCloseError against the returned err; update the assertion around conn.ReadMessage and replace the generic assert.Error with these specific checks so the server close is confirmed (refer to conn.ReadMessage, websocket.IsCloseError, and net.Error Timeout()).
🧹 Nitpick comments (2)
pkg/api/middleware/ratelimit_test.go (1)
206-210: Prefer `Eventually` over fixed sleep for first-message assertion.

Line 208 uses a fixed 20ms sleep; this can be flaky under CI load. Matching the rest of the test with `require.Eventually` is more stable.

♻️ Suggested change
```diff
 // First message should go through.
 require.NoError(t, conn.WriteMessage(websocket.TextMessage, []byte("hello")))
-time.Sleep(20 * time.Millisecond)
-assert.Equal(t, int32(1), handlerCalls.Load(), "first message should be handled")
+require.Eventually(t, func() bool {
+	return handlerCalls.Load() == 1
+}, 500*time.Millisecond, 10*time.Millisecond, "first message should be handled")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/api/middleware/ratelimit_test.go` around lines 206 - 210, Replace the hardcoded time.Sleep(20 * time.Millisecond) used to wait for the first websocket message with a flaky-resistant wait using require.Eventually; specifically, after calling conn.WriteMessage(websocket.TextMessage, []byte("hello")), use require.Eventually to poll handlerCalls.Load() until it equals int32(1) (matching the existing assert.Equal check), so the test waits up to a reasonable timeout/retry interval instead of sleeping a fixed duration.

pkg/api/middleware/ratelimit.go (1)
171-175: Include wait error details in rate-limit warning logs.

When `rl.Wait(ctx)` fails, logging `err` and `wait_timeout` will make timeout-vs-cancel diagnosis much easier.

🔎 Suggested logging improvement
```diff
 if err := rl.Wait(ctx); err != nil {
 	log.Warn().
+		Err(err).
 		Str("ip", host).
+		Dur("wait_timeout", waitTimeout).
 		Int("msg_size", len(msg)).
 		Msg("WebSocket rate limit exceeded, closing connection")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/api/middleware/ratelimit.go` around lines 171 - 175, The warning for a failed rl.Wait(ctx) call must include the actual error and the effective wait timeout to aid diagnosis: update the rl.Wait(ctx) error branch to log the error via log.Warn().Err(err) and add a field for the remaining wait duration (derive wait_timeout from ctx.Deadline() using time.Until if a deadline exists, or log a zero/unknown value otherwise); keep existing fields like "ip" (host) and "msg_size" (len(msg)) and update the message string accordingly when calling log.Warn().Err(err).Int64/Str("wait_timeout", ...) so rl.Wait, ctx, host, msg and the log.Warn call are the places to change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/database/mediadb/mediadb.go`:
- Around line 191-206: markBrowseCacheDirty currently accumulates systemDBIDs
into browseCacheDirtyDBIDs but never flips browseCacheDirtyAll when the tracked
set exceeds the configured cap, allowing huge per-commit IN clauses; modify the
logic in markBrowseCacheDirty (and the analogous code path around
sqlInvalidateBrowseCache/commit) to check maxSelectiveInvalidationSystems and,
if the number of tracked IDs would exceed that cap, set browseCacheDirtyAll =
true and clear browseCacheDirtyDBIDs to nil so the commit path uses the full
invalidation branch instead of emitting an unbounded IN list. Ensure the same
cap check is applied in the second similar block referenced (the other code that
appends to browseCacheDirtyDBIDs) so both places flip to browseCacheDirtyAll
when the cap is reached.
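The cap-flip logic the prompt describes can be sketched in isolation. The following is a minimal in-memory model, not the project's actual implementation: the names mirror `markBrowseCacheDirty`, `browseCacheDirtyAll`, and `maxSelectiveInvalidationSystems` from the review, and the cap value of 3 is an assumption for illustration.

```go
package main

import "fmt"

// Hypothetical cap standing in for maxSelectiveInvalidationSystems.
const maxSelectiveInvalidationSystems = 3

type browseCacheState struct {
	dirtyAll   bool
	dirtyDBIDs map[int64]struct{}
}

// markDirty tracks per-system IDs until the cap is exceeded, then flips to
// full invalidation so the commit path never emits an unbounded IN clause.
func (s *browseCacheState) markDirty(systemDBIDs ...int64) {
	if s.dirtyAll {
		return // already invalidating everything; nothing more to track
	}
	if s.dirtyDBIDs == nil {
		s.dirtyDBIDs = make(map[int64]struct{})
	}
	for _, id := range systemDBIDs {
		s.dirtyDBIDs[id] = struct{}{}
	}
	if len(s.dirtyDBIDs) > maxSelectiveInvalidationSystems {
		s.dirtyAll = true
		s.dirtyDBIDs = nil // commit path takes the full-invalidation branch
	}
}

func main() {
	var s browseCacheState
	s.markDirty(1, 2, 3)
	fmt.Println(s.dirtyAll, len(s.dirtyDBIDs)) // under the cap: selective
	s.markDirty(4)
	fmt.Println(s.dirtyAll, s.dirtyDBIDs == nil) // over the cap: full
}
```

Applying the same check in every place that appends to the dirty set keeps the two code paths consistent, as the prompt asks.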
---
Outside diff comments:
In `@pkg/api/middleware/ratelimit_test.go`:
- Around line 254-261: The test currently uses assert.Error after
conn.ReadMessage which can pass due to a deadline timeout; change the assertion
to verify the connection was actually closed by the server by (1) calling
conn.ReadMessage(), capturing err, (2) asserting that err is not a timeout
(check net.Error and Timeout()), and (3) asserting that the error is a WebSocket
close-type error using websocket.IsCloseError or
websocket.IsUnexpectedCloseError against the returned err; update the assertion
around conn.ReadMessage and replace the generic assert.Error with these specific
checks so the server close is confirmed (refer to conn.ReadMessage,
websocket.IsCloseError, and net.Error Timeout()).
---
Nitpick comments:
In `@pkg/api/middleware/ratelimit_test.go`:
- Around line 206-210: Replace the hardcoded time.Sleep(20 * time.Millisecond)
used to wait for the first websocket message with a flaky-resistant wait using
require.Eventually; specifically, after calling
conn.WriteMessage(websocket.TextMessage, []byte("hello")), use
require.Eventually to poll handlerCalls.Load() until it equals int32(1)
(matching the existing assert.Equal check), so the test waits up to a reasonable
timeout/retry interval instead of sleeping a fixed duration.
In `@pkg/api/middleware/ratelimit.go`:
- Around line 171-175: The warning for a failed rl.Wait(ctx) call must include
the actual error and the effective wait timeout to aid diagnosis: update the
rl.Wait(ctx) error branch to log the error via log.Warn().Err(err) and add a
field for the remaining wait duration (derive wait_timeout from ctx.Deadline()
using time.Until if a deadline exists, or log a zero/unknown value otherwise);
keep existing fields like "ip" (host) and "msg_size" (len(msg)) and update the
message string accordingly when calling
log.Warn().Err(err).Int64/Str("wait_timeout", ...) so rl.Wait, ctx, host, msg
and the log.Warn call are the places to change.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 525e9f4b-1ae0-4855-920a-b5e1e764ac34
📒 Files selected for processing (6)
- pkg/api/methods/media_browse_test.go
- pkg/api/middleware/ratelimit.go
- pkg/api/middleware/ratelimit_test.go
- pkg/database/mediadb/mediadb.go
- pkg/database/mediadb/sql_browse.go
- pkg/database/mediadb/sql_browse_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
- pkg/database/mediadb/sql_browse.go
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pkg/database/mediadb/sql_browse_cache.go (1)
42-138: 🛠️ Refactor suggestion | 🟠 Major
Add direct tests for the new system-scoped cache behavior.
The provided coverage only shows statement preparation in pkg/database/mediadb/optimization_test.go:60-70. Please add behavior tests for per-system row contents/counts and for `sqlInvalidateBrowseCache`'s scoped-vs-full deletion branches.
As per coding guidelines, "Write tests for all new code — follow TESTING.md and pkg/testing/README.md patterns".
Also applies to: 142-165
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/database/mediadb/sql_browse_cache.go` around lines 42 - 138, Add unit tests that assert the system-scoped browse cache behavior: create fixtures inserting Media rows with different SystemDBID values and virtual ("scheme://") and filesystem paths, run the code that populates dirCounts/systemDirCounts (exercise the code paths that call browseCacheParentAndName and insert into BrowseCache and BrowseSystemCache), then query BrowseSystemCache to verify per-system DirPath/FileCount rows match expected counts and that BrowseCache contains global counts; also add tests for sqlInvalidateBrowseCache that cover both scoped deletion (passing a SystemDBID clears only BrowseSystemCache rows for that system) and full deletion (no SystemDBID clears both BrowseCache and BrowseSystemCache). Ensure tests reference browseSystemCacheKey, BrowseCache, BrowseSystemCache, browseCacheParentAndName and sqlInvalidateBrowseCache to target the new behaviors.
🧹 Nitpick comments (1)
pkg/api/middleware/ratelimit_test.go (1)
257-260: Avoid fixed `Sleep` in timeout assertion to reduce flakiness.
The `time.Sleep(50 * time.Millisecond)` gate can be scheduler-sensitive. Prefer `assert.Never`/`require.Eventually`-style polling for deterministic timing checks.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/api/middleware/ratelimit_test.go` around lines 257 - 260, Replace the fixed time.Sleep(50 * time.Millisecond) check with a polling-style assertion so the test isn't scheduler-sensitive: after calling conn.WriteMessage(websocket.TextMessage, []byte("again")), use assert.Never (or require.Eventually in reverse) to poll handlerCalls.Load() for a short duration (e.g., 100–200ms) and assert it remains equal to int32(1); reference the existing conn.WriteMessage and handlerCalls.Load() calls and swap the Sleep-based gate for the polling-style assertion to make the timing deterministic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/api/middleware/ratelimit.go`:
- Around line 172-182: The logged wait_timeout is computed after rl.Wait(ctx)
fails so it is usually ~0/negative; change the code in the ratelimit middleware
to capture the intended wait duration before calling rl.Wait(ctx) (e.g., read
the rate limiter's configured wait timeout or compute time.Until(deadline) and
store it in a variable like waitTimeoutValuePrior) and use that stored value in
the log when rl.Wait returns an error; reference rl.Wait(ctx), waitTimeoutValue
(or the rate limiter config field) and the log.Warn() call to locate and update
the code.
In `@pkg/database/mediadb/sql_browse_cache.go`:
- Around line 124-129: The row insert is storing name "." for the root path
because browseCacheParentAndName("/") returns "."; before calling
systemStmt.ExecContext (in the loop over systemDirCounts), detect if key.dirPath
== "/" and override the returned name to a proper root label (e.g., "/") and/or
set an appropriate parentPath for root, then pass those corrected values into
the ExecContext call for BrowseSystemCache insertion so the cached Name for root
is valid.
---
Outside diff comments:
In `@pkg/database/mediadb/sql_browse_cache.go`:
- Around line 42-138: Add unit tests that assert the system-scoped browse cache
behavior: create fixtures inserting Media rows with different SystemDBID values
and virtual ("scheme://") and filesystem paths, run the code that populates
dirCounts/systemDirCounts (exercise the code paths that call
browseCacheParentAndName and insert into BrowseCache and BrowseSystemCache),
then query BrowseSystemCache to verify per-system DirPath/FileCount rows match
expected counts and that BrowseCache contains global counts; also add tests for
sqlInvalidateBrowseCache that cover both scoped deletion (passing a SystemDBID
clears only BrowseSystemCache rows for that system) and full deletion (no
SystemDBID clears both BrowseCache and BrowseSystemCache). Ensure tests
reference browseSystemCacheKey, BrowseCache, BrowseSystemCache,
browseCacheParentAndName and sqlInvalidateBrowseCache to target the new
behaviors.
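To make the scoped-vs-full expectation concrete, the two deletion branches can be modeled in memory. This is an illustrative stand-in only: the types below are hypothetical, and the real code issues `DELETE` statements against the `BrowseCache` and `BrowseSystemCache` tables.

```go
package main

import "fmt"

// In-memory stand-ins for the global and per-system browse caches.
type browseCaches struct {
	global    map[string]int           // dir path -> file count
	perSystem map[int64]map[string]int // SystemDBID -> dir path -> file count
}

// invalidate mirrors the two branches under test: a non-nil systemDBID
// clears only that system's per-system rows; nil clears both caches.
func (c *browseCaches) invalidate(systemDBID *int64) {
	if systemDBID != nil {
		delete(c.perSystem, *systemDBID)
		return
	}
	c.global = map[string]int{}
	c.perSystem = map[int64]map[string]int{}
}

func main() {
	c := browseCaches{
		global:    map[string]int{"root": 5},
		perSystem: map[int64]map[string]int{1: {"snes": 3}, 2: {"nes": 2}},
	}
	one := int64(1)
	c.invalidate(&one) // scoped: only system 1 cleared, global untouched
	fmt.Println(len(c.global), len(c.perSystem))
	c.invalidate(nil) // full: both caches cleared
	fmt.Println(len(c.global), len(c.perSystem))
}
```

A database-backed test would assert the same two post-conditions by querying row counts after each branch.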
---
Nitpick comments:
In `@pkg/api/middleware/ratelimit_test.go`:
- Around line 257-260: Replace the fixed time.Sleep(50 * time.Millisecond) check
with a polling-style assertion so the test isn't scheduler-sensitive: after
calling conn.WriteMessage(websocket.TextMessage, []byte("again")), use
assert.Never (or require.Eventually in reverse) to poll handlerCalls.Load() for
a short duration (e.g., 100–200ms) and assert it remains equal to int32(1);
reference the existing conn.WriteMessage and handlerCalls.Load() calls and swap
the Sleep-based gate for the polling-style assertion to make the timing
deterministic.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 254875aa-f244-43f7-b1d3-aa0d4ff23a13
📒 Files selected for processing (4)
- pkg/api/middleware/ratelimit.go
- pkg/api/middleware/ratelimit_test.go
- pkg/database/mediadb/mediadb.go
- pkg/database/mediadb/sql_browse_cache.go
🚧 Files skipped from review as they are similar to previous changes (1)
- pkg/database/mediadb/mediadb.go
```diff
 waitTimeoutValue := time.Duration(0)
 if deadline, ok := ctx.Deadline(); ok {
 	waitTimeoutValue = time.Until(deadline)
 }

 log.Warn().
 	Err(err).
 	Str("ip", host).
 	Int("msg_size", len(msg)).
-	Msg("WebSocket rate limit exceeded, closing connection")
+	Str("wait_timeout", waitTimeoutValue.String()).
+	Msg("WebSocket rate limit wait failed, closing connection")
```
wait_timeout log value is misleading on timeout path.
At lines 172-175, waitTimeoutValue is derived from time.Until(deadline) after rl.Wait(ctx) fails, so it is usually ~0 or negative on deadline expiry. This obscures the configured timeout in logs.
Proposed fix

```diff
-	waitTimeoutValue := time.Duration(0)
-	if deadline, ok := ctx.Deadline(); ok {
-		waitTimeoutValue = time.Until(deadline)
-	}
-
 	log.Warn().
 		Err(err).
 		Str("ip", host).
 		Int("msg_size", len(msg)).
-		Str("wait_timeout", waitTimeoutValue.String()).
+		Dur("wait_timeout", waitTimeout).
 		Msg("WebSocket rate limit wait failed, closing connection")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/api/middleware/ratelimit.go` around lines 172 - 182, The logged
wait_timeout is computed after rl.Wait(ctx) fails so it is usually ~0/negative;
change the code in the ratelimit middleware to capture the intended wait
duration before calling rl.Wait(ctx) (e.g., read the rate limiter's configured
wait timeout or compute time.Until(deadline) and store it in a variable like
waitTimeoutValuePrior) and use that stored value in the log when rl.Wait returns
an error; reference rl.Wait(ctx), waitTimeoutValue (or the rate limiter config
field) and the log.Warn() call to locate and update the code.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/api/methods/media_browse_test.go`:
- Around line 129-132: The test mixes OS-specific romsRoot (from
browseTestAbsPath("roms")) with hardcoded POSIX prefixes like "/roms/" causing
matcher mismatches on Windows; compute a consistent prefix from romsRoot (e.g.
romsPrefix := filepath.ToSlash(romsRoot) + "/") and replace any hardcoded
"/roms/" occurrences used in mock expectations or matchers (e.g., places
referenced around the mockPlatform expectations and assertions at the former
lines ~333 and ~378) to use romsPrefix, and ensure any test path construction
uses filepath.Join instead of literal POSIX strings.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 94cec25f-f29d-41be-baa5-666edd8c17c6
📒 Files selected for processing (8)
- pkg/api/methods/media_browse.go
- pkg/api/methods/media_browse_test.go
- pkg/api/middleware/ratelimit.go
- pkg/api/middleware/ratelimit_test.go
- pkg/database/mediadb/mediadb_integration_test.go
- pkg/database/mediadb/sql_browse.go
- pkg/database/mediadb/sql_browse_cache.go
- pkg/database/mediadb/sql_browse_test.go
🚧 Files skipped from review as they are similar to previous changes (2)
- pkg/database/mediadb/mediadb_integration_test.go
- pkg/database/mediadb/sql_browse.go
```diff
 romsRoot := browseTestAbsPath("roms")
 mockPlatform.On("SupportedReaders", mock.Anything).Return(nil)
 mockPlatform.On("RootDirs", mock.AnythingOfType("*config.Instance")).
-	Return([]string{"/roms"})
+	Return([]string{romsRoot})
```
Inconsistent path style between romsRoot and hardcoded mock expectations.
Tests use browseTestAbsPath("roms") to derive romsRoot, but some mock expectations still use hardcoded POSIX-style paths like "/roms/" (e.g., lines 333, 378). On Windows, browseTestAbsPath("roms") produces a path like C:\roms, causing a mismatch with the hardcoded "/roms/" prefix in matchers.
Consider deriving the prefix from romsRoot consistently: `romsPrefix := filepath.ToSlash(romsRoot) + "/"`
As per coding guidelines: "Use filepath.Join for path construction everywhere, including test files — never hardcode POSIX-style paths like '/roms/snes/game.sfc' as string literals".
Also applies to: 333-333, 378-378
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/api/methods/media_browse_test.go` around lines 129 - 132, The test mixes
OS-specific romsRoot (from browseTestAbsPath("roms")) with hardcoded POSIX
prefixes like "/roms/" causing matcher mismatches on Windows; compute a
consistent prefix from romsRoot (e.g. romsPrefix := filepath.ToSlash(romsRoot) +
"/") and replace any hardcoded "/roms/" occurrences used in mock expectations or
matchers (e.g., places referenced around the mockPlatform expectations and
assertions at the former lines ~333 and ~378) to use romsPrefix, and ensure any
test path construction uses filepath.Join instead of literal POSIX strings.
Summary
Adds `systems` filtering to `media.browse`, including populated system route discovery and `systemIds` metadata for route/directory entries.
Summary by CodeRabbit
New Features
Behavior Changes
Documentation
Tests