Conversation
⚠️ Warning: Rate limit exceeded

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 3 minutes and 37 seconds.

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered using the `@coderabbitai review` command. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
📝 Walkthrough

Adds end-to-end Solidity contract verification: backend APIs and solc integration for compiling and bytecode-matching, proxy detection enhancements, and frontend UI/hooks/constants/utilities to submit verification requests, view verification results, and decode transaction input data.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    %% Components: User, Frontend, Backend API, Database, solc, Blockchain RPC
    actor User
    participant Frontend as Frontend (UI)
    participant Backend as Backend API
    participant DB as Database
    participant Solc as solc Compiler
    participant RPC as Blockchain RPC
    User->>Frontend: Submit verification form
    Frontend->>Backend: POST /api/contracts/{address}/verify
    Backend->>RPC: eth_getCode(address)
    RPC-->>Backend: deployed bytecode
    Backend->>Backend: download/cache solc binary (solc_cache_dir)
    Backend->>Solc: compile source (standard-json)
    Solc-->>Backend: compiled bytecode + ABI
    Backend->>Backend: strip CBOR metadata & normalize immutable refs
    Backend->>Backend: compare normalized bytecodes
    alt match
        Backend->>DB: upsert contract_abis (verification metadata)
        DB-->>Backend: success
        Backend-->>Frontend: 200 VerifyResponse (verified=true, abi)
    else mismatch
        Backend-->>Frontend: 400 BytecodeMismatch error
    end
    Frontend-->>User: show result (verified UI or error)
```
```mermaid
sequenceDiagram
    actor User
    participant Frontend as Frontend (Page)
    participant Backend as Backend API
    participant DB as Database
    participant RPC as Blockchain RPC
    User->>Frontend: Open address page
    Frontend->>Backend: GET /api/contracts/{address}
    Backend->>DB: query contract_abis
    alt record found
        DB-->>Backend: contract metadata
        Backend-->>Frontend: ContractDetail (verified=true)
    else not found
        Backend->>RPC: eth_getCode(address)
        RPC-->>Backend: bytecode
        Backend-->>Frontend: ContractDetail (verified=false)
    end
    Frontend->>User: render contract tab or verification form
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Adds a Single file / Multi-file toggle to the verification form. In multi-file mode, users select .sol files directly via a native file picker instead of manually typing filenames and pasting content. Files are read with FileReader and submitted as source_files to the existing API endpoint.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
frontend/nginx.conf (1)
19-20: ⚠️ Potential issue | 🟡 Minor: Resolve service naming inconsistency between guidelines and infrastructure.

The nginx config proxies to `atlas-server:3000` (lines 20, 39), which matches the actual service name in `docker-compose.yml`. However, this contradicts the coding guideline requiring `atlas-api:3000`. Either update the guideline to reflect the actual service name `atlas-server`, or rename the service to `atlas-api` to comply with standards. Verify with the infrastructure team which naming convention should be enforced.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/nginx.conf` around lines 19 - 20, The nginx location block for /api/events currently proxy_pass to atlas-server:3000 but your coding guideline requires atlas-api:3000; either update the nginx proxy_pass to use atlas-api:3000 or rename the backend service from atlas-server to atlas-api so names match the guideline; adjust any other proxy_pass entries (e.g., the other occurrence at line ~39) and update service references (container/service name) consistently across the compose/deployment to avoid mismatch, then confirm with infra that atlas-api is the canonical name before committing.
🧹 Nitpick comments (8)
docker-compose.yml (1)
17-19: Make the amd64 pin override-able for non-Apple/ARM setups.
Hard-pinning platform is a useful workaround, but making it env-configurable avoids unnecessary emulation constraints in other local/dev environments.

Proposed tweak:

```diff
-    platform: linux/amd64
+    platform: ${ATLAS_SERVER_PLATFORM:-linux/amd64}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docker-compose.yml` around lines 17 - 19, Replace the hard-coded platform: linux/amd64 entry with an environment-configurable value so non-ARM developers can override it; update the docker-compose service entry that currently contains "platform: linux/amd64" to reference an env var (e.g., PLATFORM) with a sensible default, and document adding PLATFORM to .env or export it locally (or remove it) for native amd64/arm setups so atlas-server (the service with the platform key) won't force emulation unnecessarily.

frontend/src/constants/contractVerification.ts (1)
1-105: Consider automating `SOLC_VERSION_OPTIONS` generation to prevent staleness.

The static snapshot will drift; a generated artifact (scripted at build/release time) would keep the selectable versions current with less manual churn.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/constants/contractVerification.ts` around lines 1 - 105, The SOLC_VERSION_OPTIONS constant is a static snapshot that will become stale; replace it with a generated artifact produced at build/release time: add a small Node script (e.g., scripts/generateSolcVersions.ts) that fetches the Solidity binaries index (https://binaries.soliditylang.org/*/list.json), extracts the compiler version strings, sorts/filters them as needed, and writes frontend/src/constants/contractVerification.ts exporting SOLC_VERSION_OPTIONS (preserving the "as const" typing); wire this script into package.json (prebuild/prepublish) so the file is regenerated automatically and add simple caching/fallback logic in the script to avoid build failures when the remote is unavailable.

frontend/src/pages/AddressPage.tsx (1)
175-187: Consider expanding proxy type labels.

The proxy banner provides good UX but only formats `eip1967` and `eip1822` with human-readable labels. The `transparent` and `custom` types fall back to raw strings.

♻️ Optional: add labels for all proxy types

```diff
- This is a {proxyInfo.proxy_type === 'eip1967' ? 'EIP-1967' : proxyInfo.proxy_type === 'eip1822' ? 'EIP-1822 (UUPS)' : proxyInfo.proxy_type} proxy.
+ This is a {
+   proxyInfo.proxy_type === 'eip1967' ? 'EIP-1967' :
+   proxyInfo.proxy_type === 'eip1822' ? 'EIP-1822 (UUPS)' :
+   proxyInfo.proxy_type === 'transparent' ? 'Transparent' :
+   proxyInfo.proxy_type === 'custom' ? 'Custom' :
+   proxyInfo.proxy_type
+ } proxy.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/pages/AddressPage.tsx` around lines 175 - 187, The proxy banner in AddressPage.tsx shows human-friendly labels only for 'eip1967' and 'eip1822' while leaving 'transparent' and 'custom' as raw strings; update the rendering to map all known proxy types to readable labels (for example: 'eip1967' -> 'EIP-1967', 'eip1822' -> 'EIP-1822 (UUPS)', 'transparent' -> 'Transparent Proxy', 'custom' -> 'Custom Proxy') by introducing a small helper or lookup (e.g., getProxyLabel(proxyInfo.proxy_type) or a PROXY_LABELS map) and use that helper where proxyInfo.proxy_type is currently inspected in the JSX so the banner always displays a clear, user-friendly proxy type.

backend/crates/atlas-server/src/api/handlers/proxy.rs (2)
37-40: Inefficient: creates a new HTTP client per RPC call.

A new `reqwest::Client` is instantiated on every `read_address_slot` invocation. Since `reqwest::Client` manages connection pooling internally, reusing a single client is more efficient. Consider adding a shared client to `AppState` or creating one at the start of `resolve_proxy`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/src/api/handlers/proxy.rs` around lines 37 - 40, The code creates a new reqwest::Client inside read_address_slot/resolve_proxy for every RPC call; instead add a shared Client instance to your application state and reuse it: add a reqwest::Client (or Arc<reqwest::Client>) field to AppState, construct it once with the desired timeout when AppState is built, and then replace the per-call Client::builder() usage in resolve_proxy/read_address_slot with a clone/reference to AppState's client; ensure any call-sites use the shared client and remove the per-request build to enable connection pooling and reduce allocations.
328-345: Uses `COUNT(*)` and `OFFSET` pagination for `proxy_contracts`.

The `list_proxies` handler uses `COUNT(*)` for the total count and `OFFSET` for pagination. As per coding guidelines, large tables should use `pg_class.reltuples` for row-count estimation, and pagination should use cursor-based keyset pagination instead of `OFFSET` to avoid performance degradation as the table grows.

As per coding guidelines: "For large tables (transactions, addresses), use pg_class.reltuples via get_table_count(pool, table_name) instead of COUNT(*)" and "Never use OFFSET for large tables in SQL queries; use keyset/cursor pagination instead."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/src/api/handlers/proxy.rs` around lines 328 - 345, The handler list_proxies currently uses COUNT(*) and OFFSET which don't scale; replace the COUNT(*) call with get_table_count(&state.pool, "proxy_contracts") to obtain the estimated total, and switch the SQL query to keyset pagination: add cursor parameters to the request (e.g., last_detected_at_block and last_proxy_address or a single encoded cursor) and change the WHERE/ORDER clause in the query used in list_proxies to filter rows earlier than the cursor (e.g., WHERE (detected_at_block, proxy_address) < ($cursor_block, $cursor_addr) ORDER BY detected_at_block DESC, proxy_address DESC LIMIT $1) and bind the cursor values instead of using OFFSET; keep ordering deterministic by including proxy_address as a tie-breaker and ensure the Pagination type (or handler parameters) is updated to accept/produce the cursor and limit.frontend/src/hooks/useContract.ts (1)
28-29: Unsafe error type casting.

The caught error is cast to `ApiError` without validation. If `getContractDetail` throws an unexpected error (e.g., a network error with a different shape), the `error` state may not conform to the `ApiError` interface.

Proposed fix to validate error shape:

```diff
  } catch (err) {
-   setError(err as ApiError);
+   const apiErr = err as { error?: string; status?: number };
+   setError({ error: apiErr.error ?? 'Unknown error', status: apiErr.status });
  } finally {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/hooks/useContract.ts` around lines 28 - 29, The catch block in useContract is unsafely casting all thrown errors to ApiError; update the catch in the getContractDetail call to validate the error shape before calling setError. Implement or use a small type-guard (e.g., isApiError) that checks required ApiError fields and, if it passes, call setError(err); otherwise setError with a normalized fallback object (e.g., { message: String(err), code: 'UNKNOWN' }) so setError always receives a predictable ApiError-shaped value; update references to getContractDetail, setError, and ApiError when adding the guard and fallback.

frontend/src/api/contracts.ts (1)
9-10: Unhandled JSON parse error on success response.

If the response is `ok` but the body contains malformed JSON, `res.json()` will throw an uncaught error. Consider wrapping with error handling similar to the error path.

Proposed fix:

```diff
- return res.json();
+ return res.json().catch(() => {
+   throw { error: 'Invalid response from server' };
+ });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/api/contracts.ts` around lines 9 - 10, The success path currently calls res.json() directly which will throw if the body is malformed; update the function containing the response handling (the code that uses the res variable and calls res.json()) to parse the body inside a try/catch, i.e., attempt await res.json() in a try block and on failure handle it the same way as the error path (log/throw a clear Parse error or return a safe fallback), so replace the direct return res.json() with a guarded parse using res.json() inside a try/catch and surface a descriptive error or fallback value.frontend/src/utils/abiDecode.ts (1)
119-122: Add test case to verify custom Keccak-256 implementation against known vectors.

This custom implementation is used for 4-byte function selector matching and is not cryptographically sensitive. However, since there are no existing tests for the `keccak256` or `keccak256Hex` functions, add a test to verify correctness against a known vector: for example, `keccak256("transfer(address,uint256)")` should produce `0xa9059cbb...` (the standard ERC-20 transfer selector).
Verify each finding against the current code and only fix it if needed. In `@frontend/src/utils/abiDecode.ts` around lines 119 - 122, Add a unit test that verifies the custom keccak256 and keccak256Hex functions return the expected values for the known vector "transfer(address,uint256)"; call keccak256Hex("transfer(address,uint256)") and assert it equals the canonical hex (starting with "0xa9059cbb...") and also assert the 4-byte selector (first 10 chars including 0x) equals "0xa9059cbb" to ensure selector matching works; reference the keccak256 and keccak256Hex functions (and bytesToHex if used) in the test and fail the test if results differ from the known vector.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/crates/atlas-server/Cargo.toml`:
- Line 32: The dev-dependency entry for "tempfile" in the [dev-dependencies]
section is pinned to "3" and must be unified with the workspace-specified
source; update the "tempfile" entry under [dev-dependencies] to use workspace =
true (matching the [dependencies] entry style) so the crate uses the
workspace-managed version instead of a hardcoded version.
In `@backend/crates/atlas-server/src/api/handlers/contracts.rs`:
- Around line 197-205: The code currently both pre-checks existence via the
already_verified Option<(String,)> query against contract_abis and also performs
an INSERT with an ON CONFLICT DO UPDATE later, which conflicts in intent; decide
desired behavior and make the code consistent: if re-verification should be
rejected, remove the INSERT's ON CONFLICT DO UPDATE clause so the insert will
fail and keep the already_verified early-return; if re-verification should
update the record, remove the entire already_verified query and early return so
the later INSERT ... ON CONFLICT DO UPDATE handles updates; update the code
paths around the already_verified variable and the INSERT statement accordingly
to reflect the chosen behavior.
- Around line 445-448: The call to child.wait_with_output() can hang
indefinitely; wrap the await in tokio::time::timeout with a sensible Duration
(e.g., a few seconds) when waiting for the solc child process (the variable
child and its wait_with_output() call), map a timeout error to an appropriate
AtlasError (instead of hanging) and include context like "solc timed out" in the
error message, while preserving the existing mapping for other errors (currently
using AtlasError::Internal(format!(...))). Ensure the resulting output variable
is obtained only on success of the timeout and handle cancellation/kill of the
child process on timeout.
In `@frontend/src/components/ContractTab.tsx`:
- Around line 146-148: The code currently uses parseInt(optimizationRunsValue,
10) || 200 which treats 0 as falsy and silently falls back to 200; change the
logic so when optimizationEnabled is true you explicitly parse the value (const
parsed = parseInt(optimizationRunsValue, 10)) and then: if optimizationRunsValue
is an empty string use the intended default (200), else if Number.isNaN(parsed)
set optimization_runs to undefined (so validation can surface the error) else
set optimization_runs to parsed (which preserves 0). Update the assignment of
optimization_runs accordingly where optimizationEnabled and
optimizationRunsValue are referenced.
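To make the suggested rules concrete, here is a minimal sketch. `parseOptimizationRuns` is a hypothetical helper name; in the real component this logic would be inlined where `optimization_runs` is assigned.

```typescript
// Empty input -> intended default; non-numeric -> undefined so form
// validation can surface the error; everything else -> the parsed number,
// which preserves an explicit 0 (unlike `parseInt(v, 10) || 200`).
function parseOptimizationRuns(raw: string): number | undefined {
  if (raw === '') return 200; // intended default when the field is blank
  const parsed = parseInt(raw, 10);
  if (Number.isNaN(parsed)) return undefined; // invalid -> let validation flag it
  return parsed; // 0 is a valid, preserved value
}
```

Note how `0` now survives parsing instead of silently becoming 200.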
In `@frontend/src/hooks/useContract.ts`:
- Around line 17-21: When address becomes falsy the fetchContract callback
currently only clears loading which can leave stale contract/error state; update
fetchContract (and any place that handles the address-change branch) to also
call setContract(undefined) and setError(undefined) when !address, and ensure
any in-flight fetch is cancelled/ignored so old results don't repopulate
state—refer to fetchContract, setContract, setError, setLoading and address to
locate the logic to update.
In `@frontend/src/utils/abiDecode.ts`:
- Around line 90-93: The int decoding branch currently treats all /u?int(\d*)/
as unsigned; update it to detect signed ints (e.g., match /^int(\d*)$/ or check
the captured group and the presence/absence of 'u') and for signed types compute
the value using two's complement: parse raw = BigInt('0x' + wordHex), determine
bits = capturedBits || 256, if the sign bit (raw & (1n << (bits-1))) is set then
value = raw - (1n << BigInt(bits)) else value = raw, and return
value.toString(10) with size: 32; keep the existing unsigned path unchanged for
'uint...' cases.
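A self-contained sketch of the two's-complement decoding this prompt describes. `decodeSignedInt` is a hypothetical helper name; it also masks to the declared width so sub-256-bit types (e.g. `int8`) decode correctly.

```typescript
// wordHex is the 64-hex-char ABI word; bits is the declared width
// (int8 -> 8, int -> 256). Returns the signed decimal string.
function decodeSignedInt(wordHex: string, bits: number): string {
  const raw = BigInt('0x' + wordHex);
  // Mask to the declared width so sub-256-bit types are handled.
  const masked = raw & ((1n << BigInt(bits)) - 1n);
  const signBit = 1n << BigInt(bits - 1);
  // If the sign bit is set, subtract 2^bits to recover the negative value.
  const value = (masked & signBit) !== 0n ? masked - (1n << BigInt(bits)) : masked;
  return value.toString(10);
}
```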
---
Outside diff comments:
In `@frontend/nginx.conf`:
- Around line 19-20: The nginx location block for /api/events currently
proxy_pass to atlas-server:3000 but your coding guideline requires
atlas-api:3000; either update the nginx proxy_pass to use atlas-api:3000 or
rename the backend service from atlas-server to atlas-api so names match the
guideline; adjust any other proxy_pass entries (e.g., the other occurrence at
line ~39) and update service references (container/service name) consistently
across the compose/deployment to avoid mismatch, then confirm with infra that
atlas-api is the canonical name before committing.
---
Nitpick comments:
In `@backend/crates/atlas-server/src/api/handlers/proxy.rs`:
- Around line 37-40: The code creates a new reqwest::Client inside
read_address_slot/resolve_proxy for every RPC call; instead add a shared Client
instance to your application state and reuse it: add a reqwest::Client (or
Arc<reqwest::Client>) field to AppState, construct it once with the desired
timeout when AppState is built, and then replace the per-call Client::builder()
usage in resolve_proxy/read_address_slot with a clone/reference to AppState's
client; ensure any call-sites use the shared client and remove the per-request
build to enable connection pooling and reduce allocations.
- Around line 328-345: The handler list_proxies currently uses COUNT(*) and
OFFSET which don't scale; replace the COUNT(*) call with
get_table_count(&state.pool, "proxy_contracts") to obtain the estimated total,
and switch the SQL query to keyset pagination: add cursor parameters to the
request (e.g., last_detected_at_block and last_proxy_address or a single encoded
cursor) and change the WHERE/ORDER clause in the query used in list_proxies to
filter rows earlier than the cursor (e.g., WHERE (detected_at_block,
proxy_address) < ($cursor_block, $cursor_addr) ORDER BY detected_at_block DESC,
proxy_address DESC LIMIT $1) and bind the cursor values instead of using OFFSET;
keep ordering deterministic by including proxy_address as a tie-breaker and
ensure the Pagination type (or handler parameters) is updated to accept/produce
the cursor and limit.
In `@docker-compose.yml`:
- Around line 17-19: Replace the hard-coded platform: linux/amd64 entry with an
environment-configurable value so non-ARM developers can override it; update the
docker-compose service entry that currently contains "platform: linux/amd64" to
reference an env var (e.g., PLATFORM) with a sensible default, and document
adding PLATFORM to .env or export it locally (or remove it) for native amd64/arm
setups so atlas-server (the service with the platform key) won't force emulation
unnecessarily.
In `@frontend/src/api/contracts.ts`:
- Around line 9-10: The success path currently calls res.json() directly which
will throw if the body is malformed; update the function containing the response
handling (the code that uses the res variable and calls res.json()) to parse the
body inside a try/catch, i.e., attempt await res.json() in a try block and on
failure handle it the same way as the error path (log/throw a clear Parse error
or return a safe fallback), so replace the direct return res.json() with a
guarded parse using res.json() inside a try/catch and surface a descriptive
error or fallback value.
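One way to realize the guarded parse is to factor the synchronous core into a helper, which callers use as `parseJsonText(await res.text())` instead of `res.json()`. The helper name `parseJsonText` is illustrative, not the project's API.

```typescript
// Parse a response body, surfacing malformed JSON with the same error
// shape as the existing error path so the UI handles both uniformly.
function parseJsonText<T>(body: string): T {
  try {
    return JSON.parse(body) as T;
  } catch {
    throw { error: 'Invalid response from server' };
  }
}
```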
In `@frontend/src/constants/contractVerification.ts`:
- Around line 1-105: The SOLC_VERSION_OPTIONS constant is a static snapshot that
will become stale; replace it with a generated artifact produced at
build/release time: add a small Node script (e.g.,
scripts/generateSolcVersions.ts) that fetches the Solidity binaries index
(https://binaries.soliditylang.org/*/list.json), extracts the compiler version
strings, sorts/filters them as needed, and writes
frontend/src/constants/contractVerification.ts exporting SOLC_VERSION_OPTIONS
(preserving the "as const" typing); wire this script into package.json
(prebuild/prepublish) so the file is regenerated automatically and add simple
caching/fallback logic in the script to avoid build failures when the remote is
unavailable.
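The pure extraction step of such a script could look like the sketch below. The shape of list.json is an assumption here (modeled as a `releases` object mapping version strings to build filenames); verify against the actual index before relying on it.

```typescript
// Assumed shape of the fetched index; only `releases` is used here.
interface SolcList {
  releases: Record<string, string>;
}

// Extract version strings from the parsed index, sorted newest-first
// by numeric semver components.
function extractVersions(list: SolcList): string[] {
  return Object.keys(list.releases).sort((a, b) => {
    const pa = a.split('.').map(Number);
    const pb = b.split('.').map(Number);
    for (let i = 0; i < 3; i++) {
      if (pa[i] !== pb[i]) return (pb[i] ?? 0) - (pa[i] ?? 0);
    }
    return 0;
  });
}
```

The fetch, caching, and file-writing around this would live in the generation script itself.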
In `@frontend/src/hooks/useContract.ts`:
- Around line 28-29: The catch block in useContract is unsafely casting all
thrown errors to ApiError; update the catch in the getContractDetail call to
validate the error shape before calling setError. Implement or use a small
type-guard (e.g., isApiError) that checks required ApiError fields and, if it
passes, call setError(err); otherwise setError with a normalized fallback object
(e.g., { message: String(err), code: 'UNKNOWN' }) so setError always receives a
predictable ApiError-shaped value; update references to getContractDetail,
setError, and ApiError when adding the guard and fallback.
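A sketch of the type-guard approach. The `ApiError` shape used here ({ error: string; status?: number }) is inferred from the proposed fix earlier in this review and may differ from the project's real interface.

```typescript
interface ApiError {
  error: string;
  status?: number;
}

// Narrow an unknown thrown value to ApiError only if it has the shape.
function isApiError(err: unknown): err is ApiError {
  return (
    typeof err === 'object' &&
    err !== null &&
    typeof (err as { error?: unknown }).error === 'string'
  );
}

// Normalize anything non-conforming into a predictable fallback.
function toApiError(err: unknown): ApiError {
  return isApiError(err) ? err : { error: String(err) };
}
```

The catch block would then call `setError(toApiError(err))`.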
In `@frontend/src/pages/AddressPage.tsx`:
- Around line 175-187: The proxy banner in AddressPage.tsx shows human-friendly
labels only for 'eip1967' and 'eip1822' while leaving 'transparent' and 'custom'
as raw strings; update the rendering to map all known proxy types to readable
labels (for example: 'eip1967' -> 'EIP-1967', 'eip1822' -> 'EIP-1822 (UUPS)',
'transparent' -> 'Transparent Proxy', 'custom' -> 'Custom Proxy') by introducing
a small helper or lookup (e.g., getProxyLabel(proxyInfo.proxy_type) or a
PROXY_LABELS map) and use that helper where proxyInfo.proxy_type is currently
inspected in the JSX so the banner always displays a clear, user-friendly proxy
type.
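The lookup-table variant suggested here could be sketched as follows; `PROXY_LABELS` and `getProxyLabel` are hypothetical names, and the set of `proxy_type` values is taken from the review comment.

```typescript
const PROXY_LABELS: Record<string, string> = {
  eip1967: 'EIP-1967',
  eip1822: 'EIP-1822 (UUPS)',
  transparent: 'Transparent Proxy',
  custom: 'Custom Proxy',
};

// Fall back to the raw string for unknown proxy types.
function getProxyLabel(proxyType: string): string {
  return PROXY_LABELS[proxyType] ?? proxyType;
}
```

The JSX then reduces to `This is a {getProxyLabel(proxyInfo.proxy_type)} proxy.`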
In `@frontend/src/utils/abiDecode.ts`:
- Around line 119-122: Add a unit test that verifies the custom keccak256 and
keccak256Hex functions return the expected values for the known vector
"transfer(address,uint256)"; call keccak256Hex("transfer(address,uint256)") and
assert it equals the canonical hex (starting with "0xa9059cbb...") and also
assert the 4-byte selector (first 10 chars including 0x) equals "0xa9059cbb" to
ensure selector matching works; reference the keccak256 and keccak256Hex
functions (and bytesToHex if used) in the test and fail the test if results
differ from the known vector.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2b5407aa-8af8-4e16-9071-11760dd5ea7b
📒 Files selected for processing (31)
- backend/Cargo.toml
- backend/Dockerfile
- backend/crates/atlas-common/src/types.rs
- backend/crates/atlas-server/Cargo.toml
- backend/crates/atlas-server/src/api/handlers/contracts.rs
- backend/crates/atlas-server/src/api/handlers/faucet.rs
- backend/crates/atlas-server/src/api/handlers/health.rs
- backend/crates/atlas-server/src/api/handlers/metrics.rs
- backend/crates/atlas-server/src/api/handlers/mod.rs
- backend/crates/atlas-server/src/api/handlers/proxy.rs
- backend/crates/atlas-server/src/api/handlers/status.rs
- backend/crates/atlas-server/src/api/mod.rs
- backend/crates/atlas-server/src/cli.rs
- backend/crates/atlas-server/src/config.rs
- backend/crates/atlas-server/src/main.rs
- backend/crates/atlas-server/tests/integration/common.rs
- docker-compose.yml
- frontend/nginx.conf
- frontend/src/api/contracts.ts
- frontend/src/api/index.ts
- frontend/src/api/proxies.ts
- frontend/src/components/ContractTab.tsx
- frontend/src/components/index.ts
- frontend/src/constants/contractVerification.ts
- frontend/src/hooks/index.ts
- frontend/src/hooks/useContract.ts
- frontend/src/pages/AddressPage.tsx
- frontend/src/pages/TransactionDetailPage.tsx
- frontend/src/types/index.ts
- frontend/src/utils/abiDecode.ts
- frontend/src/utils/index.ts
@coderabbitai review

✅ Actions performed
Review triggered.
Actionable comments posted: 2
🧹 Nitpick comments (1)
frontend/src/utils/abiDecode.ts (1)
127-163: Consider using an established library for Keccak-256.

While the custom implementation is acknowledged as non-cryptographically-sensitive for selector matching, using a well-tested library like `@noble/hashes` or `js-sha3` would reduce maintenance burden and the risk of subtle bugs. These libraries are small, audited, and widely used.

Example with `@noble/hashes`:

```typescript
import { keccak_256 } from '@noble/hashes/sha3';

function keccak256Hex(input: string): string {
  const bytes = new TextEncoder().encode(input);
  return bytesToHex(keccak_256(bytes));
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/utils/abiDecode.ts` around lines 127 - 163, Replace the custom Keccak implementation with a well-tested library: remove or stop using the local keccak256, keccakF, readLane, writeLane (and related padding/absorb/squeeze) code and instead import a keccak function (e.g., keccak_256 from '@noble/hashes/sha3' or equivalent) and call it from keccak256Hex; ensure keccak256Hex continues to accept a string, encodes it with TextEncoder, and returns bytesToHex of the library hash output, and add the chosen library to dependencies.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@frontend/src/utils/abiDecode.ts`:
- Around line 39-44: When mapping inputs to args in the args creation block (the
inputs.map callback that builds name/type/value), guard against short or missing
calldata by extracting the segment with calldata.slice(offset, offsetEnd), then
pad it to 64 characters (e.g., segment = segment.padEnd(64, '0')) before
prefixing with "0x", so values are never "0x" or truncated; update the value
generation in that mapping (the code constructing value: `0x${...}`) to use the
padded segment and ensure consistent 32-byte hex output for each input.
- Around line 90-105: The signed integer branch in abiDecode (the intMatch
block) fails for sub-256-bit types because it operates on the full 256-bit word
without masking to the declared width; modify the logic in that block to compute
a mask = (1n << BigInt(bits)) - 1n, apply value = value & mask before checking
the signBit and performing two's-complement subtraction, and then return the
decimal string of the adjusted value; reference variables: type, intMatch,
isUnsigned, bits, value, signBit.
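The padding guard from the first comment above can be sketched as a small helper; `extractWord` is a hypothetical name, and the sketch assumes each ABI argument occupies one 64-hex-char word of calldata (0x prefix and selector already stripped).

```typescript
// Slice the index-th 32-byte word out of calldata, right-padding with
// zeros so short or missing data never yields "0x" or a truncated value.
function extractWord(calldata: string, index: number): string {
  const offset = index * 64;
  const segment = calldata.slice(offset, offset + 64).padEnd(64, '0');
  return '0x' + segment;
}
```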
---
Nitpick comments:
In `@frontend/src/utils/abiDecode.ts`:
- Around line 127-163: Replace the custom Keccak implementation with a
well-tested library: remove or stop using the local keccak256, keccakF,
readLane, writeLane (and related padding/absorb/squeeze) code and instead import
a keccak function (e.g., keccak_256 from '@noble/hashes/sha3' or equivalent) and
call it from keccak256Hex; ensure keccak256Hex continues to accept a string,
encodes it with TextEncoder, and returns bytesToHex of the library hash output,
and add the chosen library to dependencies.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f4eb0d74-9ad0-496e-bd09-aa51776ba835
📒 Files selected for processing (5)
- backend/crates/atlas-server/Cargo.toml
- backend/crates/atlas-server/src/api/handlers/contracts.rs
- frontend/src/components/ContractTab.tsx
- frontend/src/hooks/useContract.ts
- frontend/src/utils/abiDecode.ts
✅ Files skipped from review due to trivial changes (1)
- backend/crates/atlas-server/Cargo.toml
🚧 Files skipped from review as they are similar to previous changes (3)
- frontend/src/hooks/useContract.ts
- frontend/src/components/ContractTab.tsx
- backend/crates/atlas-server/src/api/handlers/contracts.rs
Summary
Summary by CodeRabbit
New Features
Improvements
Chores