refactor(cache): replace unstable_cache with createCachedFetch and redis #2479
kilo-code-bot[bot] wants to merge 2 commits into main
Conversation
```ts
      return [];
    }
    console.debug(
      `[getModelUserByokProviders_cached] found user byok providers for ${modelId}`,
// …
    300_000,
    []
  );
  return get();
```
```ts
      .map(ep => VercelUserByokInferenceProviderIdSchema.safeParse(ep.tag).data)
      .filter(providerId => providerId !== undefined) ?? [];
    if (providers.length === 0) {
      console.debug(`[getModelUserByokProviders_cached] no user byok providers for ${modelId}`);
      return [];
    }
    console.debug(
      `[getModelUserByokProviders_cached] found user byok providers for ${modelId}`,
```
```ts
  ttlMs: number,
  defaultValue: T
) {
  return createCachedFetch(
```
WARNING: This helper now swallows fetch failures and returns the default value
`createCachedFetch` catches any exception from the inner callback, so a Redis timeout, upstream fetch error, or DB error now resolves to `defaultValue` instead of propagating. That changes callers like `openrouter/providers` and `listSupportedModels` from returning an error to silently serving `[]`/`{}` on a cold cache, which is a behavior regression from `unstable_cache`.
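The regression described here can be shown with a minimal sketch. `cachedStrict` and `cachedLenient` are hypothetical stand-ins for the old and new behavior, not the repo's actual helpers:

```typescript
// Before: unstable_cache-style — a cold-cache fetch failure propagates.
async function cachedStrict<T>(fetcher: () => Promise<T>): Promise<T> {
  return fetcher(); // a rejection here reaches the caller
}

// After: createCachedFetch-style — any failure resolves to defaultValue.
async function cachedLenient<T>(
  fetcher: () => Promise<T>,
  defaultValue: T
): Promise<T> {
  try {
    return await fetcher();
  } catch {
    return defaultValue; // Redis timeout, upstream error, DB error all land here
  }
}

// On a cold cache with a failing upstream, callers now see the default
// value instead of an error.
const failingFetch = () => Promise.reject<string[]>(new Error("upstream down"));
cachedLenient(failingFetch, [] as string[]).then(v => console.log(v)); // logs []
```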
```ts
    return existing as () => Promise<T>;
  }
  const cached = createRedisCachedFetch(redisKey, fetcher, ttlMs, defaultValue);
  redisCacheRegistry.set(redisKey, cached);
```
WARNING: The shared registry grows without bound for parameterized keys
`getOrCreateRedisCachedFetch` never removes entries from `redisCacheRegistry`, so every unique user id, model id, or PostHog query hash stays resident for the life of the process. In this PR the helper is used with per-user and per-query keys, so a long-lived server can accumulate an unbounded number of closures even after their TTL has expired.
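One common mitigation, sketched here under assumed types (the registry's real shape is not shown in the diff, and `getOrCreate` is a hypothetical name), is to stamp each entry with its expiry and sweep expired entries on access:

```typescript
// Hypothetical sketch: evict registry entries once their TTL has passed,
// so per-user/per-query closures don't accumulate for the process lifetime.
type Entry<T> = { cached: () => Promise<T>; expiresAt: number };

const registry = new Map<string, Entry<unknown>>();

function getOrCreate<T>(
  key: string,
  make: () => () => Promise<T>,
  ttlMs: number
): () => Promise<T> {
  const now = Date.now();
  const existing = registry.get(key);
  if (existing && existing.expiresAt > now) {
    return existing.cached as () => Promise<T>;
  }
  // Opportunistically sweep expired entries to bound growth.
  for (const [k, e] of registry) {
    if (e.expiresAt <= now) registry.delete(k);
  }
  const cached = make();
  registry.set(key, { cached, expiresAt: now + ttlMs });
  return cached;
}
```

An LRU with a fixed capacity would bound memory more strictly; the sweep above only guarantees that entries do not outlive their TTL by more than one access.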
Code Review Summary

Status: No Issues Found | Recommendation: Merge

Files Reviewed (7 files)

Reviewed by gpt-5.4-20260305 · 147,724 tokens
```ts
  return getOrCreateRedisCachedFetch(
    `openrouter:direct-byok-models:${userId}`,
    () => getDirectByokModelsForUser(userId),
    60_000
  )();
// …
    },
    86_400_000
  );
  return get();
```
```ts
 * (both Redis and the fetcher are unavailable) so callers can decide
 * whether to fail open or handle the error explicitly.
 */
export function createRedisCachedFetch<T>(
```
This feels a bit tricky with more than one instance. We probably want to at least add jitter to the ttlMs, right?
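The jitter the reviewer suggests can be sketched as follows; the ±10% range and the helper name `withJitter` are assumptions, not part of the PR:

```typescript
// Hypothetical helper: spread TTLs by a ±10% jitter so multiple instances
// caching the same key don't all expire (and refetch) at the same moment.
function withJitter(ttlMs: number, ratio = 0.1): number {
  const delta = ttlMs * ratio;
  // Uniform in [ttlMs - delta, ttlMs + delta]
  return Math.round(ttlMs - delta + Math.random() * 2 * delta);
}

// e.g. createRedisCachedFetch(key, fetcher, withJitter(300_000), [])
```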
Summary

Replaced all 6 remaining uses of `unstable_cache` across the web app with `createCachedFetch` combined with `redisGet`/`redisSet`.

- Added a `createRedisCachedFetch` helper to `cached-fetch.ts` for parameterless caches: it checks Redis, falls back to the fetcher, and writes back to Redis only on a fresh fetch.
- Added `getOrCreateRedisCachedFetch` for parameterized caches (e.g. per-model or per-user keys) so in-process caching is reused across calls.
- Updated `byok-router.ts`, `providers/vercel/index.ts`, `posthog-query.ts`, `byok/index.ts`, `api/openrouter/providers/route.ts`, and `api/openrouter/models/route.ts`.

Verification

- `pnpm typecheck` passes
- `pnpm lint` passes
- `pnpm format` run

Visual Changes
N/A
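The flow the Summary describes for `createRedisCachedFetch` (check Redis, fall back to the fetcher, write back only on a fresh fetch) can be sketched as below. The in-memory `redisGet`/`redisSet` stubs stand in for the real Redis helpers, whose signatures are not shown in the PR, and the real helper also takes a `defaultValue` argument with its own failure semantics; this sketch shows only the read-through path:

```typescript
// In-memory stand-ins for the real Redis helpers; a Map keeps the
// sketch self-contained and lets entries expire by timestamp.
const store = new Map<string, { value: unknown; expiresAt: number }>();

async function redisGet<T>(key: string): Promise<T | null> {
  const entry = store.get(key);
  if (!entry || entry.expiresAt <= Date.now()) return null; // miss or expired
  return entry.value as T;
}

async function redisSet<T>(key: string, value: T, ttlMs: number): Promise<void> {
  store.set(key, { value, expiresAt: Date.now() + ttlMs });
}

// Check Redis first, fall back to the fetcher, and write back to Redis
// only when the value came from a fresh fetch.
function createRedisCachedFetch<T>(
  redisKey: string,
  fetcher: () => Promise<T>,
  ttlMs: number
): () => Promise<T> {
  return async () => {
    const hit = await redisGet<T>(redisKey);
    if (hit !== null) return hit; // cache hit: no write-back
    const fresh = await fetcher(); // cold cache: go upstream
    await redisSet(redisKey, fresh, ttlMs);
    return fresh;
  };
}
```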
Reviewer Notes
Redis keys introduced:
- `byok:supported-models` (5 min TTL)
- `vercel:models` (1 hour TTL)
- `posthog-query:<name>:<hash>` (24 hour TTL)
- `byok:providers:<modelId>` (5 min TTL)
- `openrouter:providers` (24 hour TTL)
- `openrouter:direct-byok-models:<userId>` (1 min TTL)