feat: file upload (minio, s3, frontend...) #443
base: master
Conversation
TODO :
Here is the idea for the refactor I did in commit: refactor(file-upload): Remove hook abstraction layer and regroup feature's files
Actionable comments posted: 1
♻️ Duplicate comments (3)

src/lib/s3/client.ts (1)

Lines 42-47: 🛠️ Refactor suggestion — Add URL sanitization for public file URLs

The current implementation directly concatenates the key with the public URL without sanitization, which could lead to URL injection issues.

```diff
 export const getFilePublicUrl = (key: string | null | undefined) => {
   if (!key) {
     return undefined;
   }
-  return `${env.NEXT_PUBLIC_S3_BUCKET_PUBLIC_URL}/${key}`;
+  const sanitizedKey = encodeURIComponent(key.replace(/^\/+/, ''));
+  return `${env.NEXT_PUBLIC_S3_BUCKET_PUBLIC_URL}/${sanitizedKey}`;
 };
```

docker-compose.yml (1)

Line 38: ⚠️ Potential issue — Fix inconsistent environment variable naming

The command uses `FORWARD_MINIO_CONSOLE_PORT` while the port mapping above uses `DOCKER_FORWARD_MINIO_CONSOLE_PORT`. This inconsistency might lead to unexpected behavior. Use a consistent variable name throughout:

```diff
-command: minio server /data/minio --console-address ":${FORWARD_MINIO_CONSOLE_PORT:-9090}"
+command: minio server /data/minio --console-address ":${DOCKER_FORWARD_MINIO_CONSOLE_PORT:-9090}"
```

src/lib/s3/schemas.ts (1)

Lines 45-53: 🛠️ Refactor suggestion — Add security-focused validation rules

The schema for the upload signed URL input is currently quite permissive and lacks important security validations such as file size limits, type validation, and stricter metadata controls. Enhance the validation schema:

```diff
 export const zUploadSignedUrlInput = () =>
   z.object({
     /**
      * Must be a string as trpc-openapi requires that attributes must be serialized
      */
-    metadata: z.string().optional(),
+    metadata: z.string().max(1024).optional(), // Limit metadata size
-    name: z.string(),
+    name: z.string().max(255).regex(/^[^/\\]*$/), // Prevent path traversal
-    type: z.string(),
+    type: z.string().regex(/^[a-zA-Z]+\/[-+.a-zA-Z0-9]+$/), // Valid MIME type format
-    size: z.number(),
+    size: z.number().min(1).max(5 * 1024 * 1024), // Limit file size (e.g., 5 MB)
     collection: zFilesCollectionName(),
   });
```
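The effect of the suggested key sanitization can be sketched in isolation. `buildPublicUrl` below is a hypothetical standalone stand-in for the project's `getFilePublicUrl` (the real function reads its base URL from `env`); it shows how stripping leading slashes plus `encodeURIComponent` neutralizes traversal and query characters in a key.

```typescript
// Hypothetical stand-in for getFilePublicUrl: the base URL is passed in
// instead of being read from env, so the sketch is self-contained.
const buildPublicUrl = (
  baseUrl: string,
  key: string | null | undefined
): string | undefined => {
  if (!key) {
    return undefined;
  }
  // Strip leading slashes, then percent-encode the rest of the key.
  const sanitizedKey = encodeURIComponent(key.replace(/^\/+/, ''));
  return `${baseUrl}/${sanitizedKey}`;
};
```

Note that `encodeURIComponent` also encodes slashes inside the key, so keys that intentionally contain path segments (e.g. `avatars/user-1.png`) end up with `%2F` in the URL; depending on the bucket layout, encoding each path segment separately may be the better trade-off.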
🧹 Nitpick comments (6)

src/components/Form/FieldUpload/FieldUpload.spec.tsx (1)

Lines 12-14: Fix typo in mock content

There's a typo in the mock file content: "mock-contet" should be "mock-content".

```diff
-const mockFileRaw = new File(['mock-contet'], 'FileTest', {
+const mockFileRaw = new File(['mock-content'], 'FileTest', {
   type: 'image/png',
 });
```

README.md (1)

Line 64: Fix grammar in setup instruction

The sentence should start with a verb instead of a noun for better clarity.

```diff
-> Setup a PostgreSQL database (locally or online) and replace the **DATABASE_URL** environment variable. Then you can run `pnpm db:push` to update your database schema and then run `pnpm db:seed` to seed your database.
+> Set up a PostgreSQL database (locally or online) and replace the **DATABASE_URL** environment variable. Then you can run `pnpm db:push` to update your database schema and then run `pnpm db:seed` to seed your database.
```

🧰 Tools

🪛 LanguageTool

[grammar] ~64: This sentence should probably be started with a verb instead of the noun 'Setup'. If not, consider inserting a comma for better clarity.
Context: ...TE] > Don't want to use docker? > > Setup a PostgreSQL database (locally or onlin... (SENT_START_NN_DT)
src/components/Form/FieldUpload/FieldUploadPreview.tsx (3)

Lines 7-39: Enhance accessibility for the image preview component

The image preview lacks proper accessibility attributes, which are important for screen readers and other assistive technologies. Consider adding these accessibility improvements:

```diff
 <Box
   position="relative"
   bg="gray.200"
   height="100%"
   aspectRatio="1"
   overflow="hidden"
   rounded="md"
+  role="img"
+  aria-label="Preview of uploaded image"
 >
-  <Box as="img" width="100%" height="100%" objectFit="cover" src={image} />
+  <Box
+    as="img"
+    width="100%"
+    height="100%"
+    objectFit="cover"
+    src={image}
+    alt=""
+  />
```

Lines 66-75: Improve error handling for file reading failures

The current implementation handles file reading errors with a bare reject, without providing specific error information or user feedback. Add more robust error handling:

```diff
 const uploadedFileToPreview = await new Promise<string>(
   (resolve, reject) => {
     const reader = new FileReader();
     reader.onloadend = () => resolve(reader.result?.toString() ?? '');
-    reader.onerror = reject;
+    reader.onerror = (error) => {
+      console.error('Error reading file for preview:', error);
+      reject(new Error('Failed to read file for preview'));
+    };
     if (value.file) {
       reader.readAsDataURL(value.file);
     }
   }
 );
```

Lines 52-56: Add loading state during file preview generation

There's no loading indicator while the file is being read, which can make for a poor user experience with larger files. Consider adding a loading state:

```diff
 const [fileToPreview, setFileToPreview] = useState<string>();
+const [isLoading, setIsLoading] = useState(false);
 const value = watch(uploaderName);

 const previewFile = useCallback(async () => {
+  setIsLoading(true);
   if (!value || (!value.fileUrl && !value.file)) {
     setFileToPreview(undefined);
+    setIsLoading(false);
     return;
   }
   // ...rest of the function
   setFileToPreview(uploadedFileToPreview);
+  setIsLoading(false);
 }, [value]);
```

src/lib/s3/schemas.ts (1)

Lines 28-38: Add error logging for file validation failures

The current implementation adds an issue to the context but doesn't log the error, which could make debugging difficult. Consider adding debug logging:

```diff
 .superRefine((input, ctx) => {
   const config = FILES_COLLECTIONS_CONFIG[collection];
   const validateFileReturn = validateFile({ input, config });

   if (!validateFileReturn.success) {
+    console.debug(`File validation failed for collection ${collection}:`, validateFileReturn.error);
     ctx.addIssue({
       code: z.ZodIssueCode.custom,
       message: t(`files.errors.${validateFileReturn.error.key}`),
     });
   }
 });
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)

- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (39)
- .dockerignore (1 hunks)
- .env.example (2 hunks)
- .github/workflows/code-quality.yml (0 hunks)
- .github/workflows/e2e-tests.yml (2 hunks)
- .storybook/preview.tsx (3 hunks)
- README.md (2 hunks)
- docker-compose.yml (1 hunks)
- docker/initialize-database.dockerfile (1 hunks)
- e2e/avatar-upload.spec.ts (1 hunks)
- package.json (3 hunks)
- src/components/Form/FieldUpload/FieldUpload.spec.tsx (1 hunks)
- src/components/Form/FieldUpload/FieldUploadPreview.tsx (1 hunks)
- src/components/Form/FieldUpload/docs.stories.tsx (1 hunks)
- src/components/Form/FieldUpload/index.tsx (1 hunks)
- src/components/Form/FieldUpload/utils.ts (1 hunks)
- src/components/Form/FormFieldController.tsx (3 hunks)
- src/components/ImageUpload/docs.stories.tsx (1 hunks)
- src/components/ImageUpload/index.tsx (1 hunks)
- src/env.mjs (3 hunks)
- src/features/account/AccountProfileForm.tsx (4 hunks)
- src/features/account/schemas.ts (2 hunks)
- src/features/admin/AdminNavBar.tsx (2 hunks)
- src/features/app/AppNavBarDesktop.tsx (3 hunks)
- src/features/users/PageAdminUsers.tsx (1 hunks)
- src/features/users/schemas.ts (2 hunks)
- src/lib/s3/client.ts (1 hunks)
- src/lib/s3/config.ts (1 hunks)
- src/lib/s3/schemas.ts (1 hunks)
- src/lib/s3/utils.ts (1 hunks)
- src/locales/ar/common.json (1 hunks)
- src/locales/en/account.json (2 hunks)
- src/locales/en/common.json (1 hunks)
- src/locales/fr/account.json (2 hunks)
- src/locales/fr/common.json (1 hunks)
- src/locales/sw/common.json (1 hunks)
- src/server/config/s3.ts (1 hunks)
- src/server/router.ts (2 hunks)
- src/server/routers/account.tsx (4 hunks)
- src/server/routers/files.ts (1 hunks)
💤 Files with no reviewable changes (1)
- .github/workflows/code-quality.yml
🚧 Files skipped from review as they are similar to previous changes (30)
- src/locales/fr/common.json
- docker/initialize-database.dockerfile
- src/server/router.ts
- src/locales/sw/common.json
- src/features/admin/AdminNavBar.tsx
- src/locales/en/common.json
- .dockerignore
- src/components/ImageUpload/index.tsx
- e2e/avatar-upload.spec.ts
- src/locales/ar/common.json
- .storybook/preview.tsx
- src/lib/s3/config.ts
- src/features/users/PageAdminUsers.tsx
- src/server/routers/files.ts
- src/locales/en/account.json
- src/components/Form/FieldUpload/utils.ts
- src/server/routers/account.tsx
- src/lib/s3/utils.ts
- .github/workflows/e2e-tests.yml
- src/features/users/schemas.ts
- .env.example
- src/features/app/AppNavBarDesktop.tsx
- src/features/account/schemas.ts
- src/components/ImageUpload/docs.stories.tsx
- src/locales/fr/account.json
- src/server/config/s3.ts
- package.json
- src/env.mjs
- src/components/Form/FieldUpload/docs.stories.tsx
- src/components/Form/FieldUpload/index.tsx
🧰 Additional context used
🪛 LanguageTool
README.md
[grammar] ~64-~64: This sentence should probably be started with a verb instead of the noun ‘Setup’. If not, consider inserting a comma for better clarity.
Context: ...TE] > Don't want to use docker? > > Setup a PostgreSQL database (locally or onlin...
(SENT_START_NN_DT)
🔇 Additional comments (7)
src/components/Form/FormFieldController.tsx (1)

Line 24: Integration of file upload functionality looks good!

The changes to add the new `upload` field type follow the established pattern in the codebase: import the component, add its props to the union type, and add the case to render it when needed.

Also applies to: 53-53, 120-121
src/components/Form/FieldUpload/FieldUpload.spec.tsx (1)

Lines 25-54: Add test cases for error scenarios

The test suite should include cases for invalid file types and upload failures to ensure robust error handling.

```diff
+test('rejects invalid file type', async () => {
+  const user = setupUser();
+  const mockedSubmit = vi.fn();
+  const invalidFile = new File(['content'], 'test.txt', { type: 'text/plain' });
+
+  render(
+    <FormMocked
+      schema={z.object({ file: zFieldUploadValue(['image']).optional() })}
+      useFormOptions={{ defaultValues: { file: undefined } }}
+      onSubmit={mockedSubmit}
+    >
+      {({ form }) => (
+        <FormField>
+          <FormFieldLabel>File</FormFieldLabel>
+          <FormFieldController
+            type="upload"
+            name="file"
+            control={form.control}
+            placeholder="Upload"
+          />
+        </FormField>
+      )}
+    </FormMocked>
+  );
+
+  const input = screen.getByLabelText<HTMLInputElement>('Upload');
+  await user.upload(input, invalidFile);
+  await user.click(screen.getByRole('button', { name: 'Submit' }));
+  expect(mockedSubmit).not.toHaveBeenCalled();
+});
```

README.md (1)

Line 28: Clear documentation of S3 requirements

The README changes effectively communicate the new S3 requirements and setup instructions.

Also applies to: 56-56, 65-66
src/lib/s3/client.ts (1)

Lines 7-40: Enhance error handling, security, and performance

The current implementation works but could be improved with better error handling, file validation, and upload progress tracking. Consider implementing:

- File size and type validation before upload
- Upload progress tracking
- More specific error messages for different failure scenarios

```diff
 export const uploadFile = async (params: {
   file: File;
   trpcClient: ReturnType<typeof trpc.useUtils>['client'];
   collection: RouterInputs['files']['uploadPresignedUrl']['collection'];
   metadata?: Record<string, string>;
   onError?: (file: File, error: unknown) => void;
+  onProgress?: (progress: number) => void;
 }) => {
   try {
+    // Validate file size and type
+    const maxSize = 5 * 1024 * 1024; // 5MB
+    if (params.file.size > maxSize) {
+      throw new Error('File size exceeds maximum limit');
+    }
+
     const presignedUrlOutput =
       await params.trpcClient.files.uploadPresignedUrl.mutate({
         // Metadata is a Record<string, string> but should be serialized for trpc-openapi
         metadata: params.metadata ? stringify(params.metadata) : undefined,
         collection: params.collection,
         type: params.file.type,
         size: params.file.size,
         name: params.file.name,
       });

-    const response = await fetch(presignedUrlOutput.signedUrl, {
-      method: 'PUT',
-      headers: { 'Content-Type': params.file.type },
-      body: params.file,
-    });
-
-    if (!response.ok) {
-      throw new Error('Failed to upload file');
-    }
+    const xhr = new XMLHttpRequest();
+    xhr.upload.addEventListener('progress', (event) => {
+      if (event.lengthComputable) {
+        params.onProgress?.(Math.round((event.loaded / event.total) * 100));
+      }
+    });
+
+    await new Promise<void>((resolve, reject) => {
+      xhr.open('PUT', presignedUrlOutput.signedUrl);
+      xhr.setRequestHeader('Content-Type', params.file.type);
+      xhr.onload = () => {
+        if (xhr.status === 200) {
+          resolve();
+        } else {
+          reject(new Error(`Upload failed with status ${xhr.status}`));
+        }
+      };
+      xhr.onerror = () => reject(new Error('Network error during upload'));
+      xhr.send(params.file);
+    });

     return presignedUrlOutput.key;
   } catch (error) {
     params.onError?.(params.file, error);
-    throw error;
+    throw error instanceof Error ? error : new Error('Failed to upload file');
   }
 };
```

src/features/account/AccountProfileForm.tsx (3)
Lines 37-54: Well-implemented mutation function with proper file handling

The mutation function correctly handles file uploads by:

- Using conditional logic to check if a file exists before attempting upload
- Properly passing the collection name and error handling callbacks
- Falling back to the existing image URL if no new file is provided

The implementation is robust and follows good practices.

Lines 93-106: Great integration of the avatar upload field

The avatar upload field is properly integrated: it has an appropriate label and placeholder, is configured to accept only allowed file types from configuration, and is connected to the form's control for proper validation. This enhances the user experience while maintaining proper form validation.

Lines 86-91: Simplified form submission handler

The form submission handler has been streamlined to directly call the mutation with form values, which is much cleaner than the previous implementation.
```sh
/bin/sh -c "
  /usr/bin/mc alias set myminio http://minio:${DOCKER_FORWARD_MINIO_PORT:-9000} $DOCKER_MINIO_USERNAME $DOCKER_MINIO_PASSWORD;
  /usr/bin/mc admin user add myminio $S3_ACCESS_KEY_ID $S3_SECRET_ACCESS_KEY;
  /usr/bin/mc admin policy attach myminio readwrite --user $S3_ACCESS_KEY_ID;
  /usr/bin/mc mb myminio/$S3_BUCKET_NAME;
  /usr/bin/mc anonymous set download myminio/$S3_BUCKET_NAME;
  exit 0;
"
```
🛠️ Refactor suggestion
Add validation and error handling to createbuckets service.
The current shell script doesn't check for errors or validate that operations have succeeded, which could lead to silent failures.
Add error checking to ensure each command completes successfully:
```diff
 entrypoint: >
   /bin/sh -c "
-  /usr/bin/mc alias set myminio http://minio:${DOCKER_FORWARD_MINIO_PORT:-9000} $DOCKER_MINIO_USERNAME $DOCKER_MINIO_PASSWORD;
-  /usr/bin/mc admin user add myminio $S3_ACCESS_KEY_ID $S3_SECRET_ACCESS_KEY;
-  /usr/bin/mc admin policy attach myminio readwrite --user $S3_ACCESS_KEY_ID;
-  /usr/bin/mc mb myminio/$S3_BUCKET_NAME;
-  /usr/bin/mc anonymous set download myminio/$S3_BUCKET_NAME;
-  exit 0;
+  /usr/bin/mc alias set myminio http://minio:${DOCKER_FORWARD_MINIO_PORT:-9000} $DOCKER_MINIO_USERNAME $DOCKER_MINIO_PASSWORD || exit 1;
+  /usr/bin/mc admin user add myminio $S3_ACCESS_KEY_ID $S3_SECRET_ACCESS_KEY 2>/dev/null || echo 'User already exists';
+  /usr/bin/mc admin policy attach myminio readwrite --user $S3_ACCESS_KEY_ID || exit 1;
+  /usr/bin/mc mb myminio/$S3_BUCKET_NAME 2>/dev/null || echo 'Bucket already exists';
+  /usr/bin/mc anonymous set download myminio/$S3_BUCKET_NAME || exit 1;
+  echo 'S3 configuration completed successfully';
   "
```
Actionable comments posted: 1
♻️ Duplicate comments (2)

docker-compose.yml (2)

Lines 28-38: ⚠️ Potential issue — MinIO service: command environment variable consistency

The MinIO service's command line uses `${FORWARD_MINIO_CONSOLE_PORT:-9090}` while the port mapping uses `${DOCKER_FORWARD_MINIO_CONSOLE_PORT:-9090}`. To avoid unexpected behavior, update the command to use the consistent environment variable name.

```diff
-command: minio server /data/minio --console-address ":${FORWARD_MINIO_CONSOLE_PORT:-9090}"
+command: minio server /data/minio --console-address ":${DOCKER_FORWARD_MINIO_CONSOLE_PORT:-9090}"
```

Lines 45-63: 🛠️ Refactor suggestion — createbuckets service: enhance error handling

The `createbuckets` service currently executes several MinIO client commands sequentially without error handling. To prevent silent failures during alias setup, user creation, policy attachment, bucket creation, and public access configuration, consider adding error checking (e.g., using `|| exit 1` or logging appropriate messages). Below is a diff suggestion to improve error handling:

```diff
-  /usr/bin/mc alias set myminio http://minio:${DOCKER_FORWARD_MINIO_PORT:-9000} $DOCKER_MINIO_USERNAME $DOCKER_MINIO_PASSWORD;
-  /usr/bin/mc admin user add myminio $S3_ACCESS_KEY_ID $S3_SECRET_ACCESS_KEY;
-  /usr/bin/mc admin policy attach myminio readwrite --user $S3_ACCESS_KEY_ID;
-  /usr/bin/mc mb myminio/$S3_BUCKET_NAME;
-  /usr/bin/mc anonymous set download myminio/$S3_BUCKET_NAME;
+  /usr/bin/mc alias set myminio http://minio:${DOCKER_FORWARD_MINIO_PORT:-9000} $DOCKER_MINIO_USERNAME $DOCKER_MINIO_PASSWORD || exit 1;
+  /usr/bin/mc admin user add myminio $S3_ACCESS_KEY_ID $S3_SECRET_ACCESS_KEY 2>/dev/null || echo "User already exists";
+  /usr/bin/mc admin policy attach myminio readwrite --user $S3_ACCESS_KEY_ID || exit 1;
+  /usr/bin/mc mb myminio/$S3_BUCKET_NAME 2>/dev/null || echo "Bucket already exists";
+  /usr/bin/mc anonymous set download myminio/$S3_BUCKET_NAME || exit 1;
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)

- docker-compose.yml (1 hunks)
🔇 Additional comments (4)

docker-compose.yml (4)

Lines 4-5: Postgres environment file usage

Using the `env_file` configuration improves manageability by centralizing environment variables. Please ensure the referenced `.env` file is maintained securely (e.g., added to `.dockerignore` if it contains sensitive data).

Lines 18-27: Database initialization service configuration

The new `initializedatabase` service is configured to depend on a healthy Postgres service and sets a proper `DATABASE_URL`. Please verify that the build context and `docker/initialize-database.dockerfile` exist and function as intended.

Lines 39-43: MinIO healthcheck configuration

The added healthcheck using `mc ready local` is a good addition to ensure service readiness. Verify that `mc` is available in the service's context at runtime.

Lines 66-68: MinIO volume declaration

The volume definition for MinIO uses the local driver, which is appropriate for persistent storage. Ensure this aligns with your data retention and backup strategies.
```yaml
healthcheck:
  test: ['CMD-SHELL', 'pg_isready -U $DOCKER_DATABASE_NAME']
  interval: 10s
  timeout: 5s
  retries: 5
```
💡 Verification agent

❓ Verification inconclusive

Postgres healthcheck variable mismatch

The healthcheck command currently uses `$DOCKER_DATABASE_NAME` for the `-U` flag, which expects a username. It should likely be `$DOCKER_DATABASE_USERNAME` to correctly check connectivity.

```diff
-test: ['CMD-SHELL', 'pg_isready -U $DOCKER_DATABASE_NAME']
+test: ['CMD-SHELL', 'pg_isready -U $DOCKER_DATABASE_USERNAME']
```

Critical issue: Postgres healthcheck environment variable

The healthcheck command in `docker-compose.yml` is incorrectly using `$DOCKER_DATABASE_NAME` for the `-U` flag, which requires a username. This should be replaced with `$DOCKER_DATABASE_USERNAME` to ensure that the healthcheck correctly verifies connectivity.

- Location: docker-compose.yml, lines 12-16
- Suggested change:

```diff
-test: ['CMD-SHELL', 'pg_isready -U $DOCKER_DATABASE_NAME']
+test: ['CMD-SHELL', 'pg_isready -U $DOCKER_DATABASE_USERNAME']
```
Actionable comments posted: 1
🧹 Nitpick comments (5)

src/lib/s3/client.ts (2)

Lines 1-3: Avoid importing entire modules when only types are used

Currently, imports on lines 1-3 include non-type references. If you only need type-level declarations from `trpc`, consider using `import type` to reduce bundling overhead.

Lines 5-11: Enable optional progress tracking and file constraints

While the function signature is well structured, consider allowing an additional callback (e.g., `onProgress`) and a file-size/type check to enhance user experience and security.

src/lib/s3/schemas.ts (3)
Lines 4-11: Organize internal vs. external imports

Grouping external libraries (`zod`, `i18next`) and internal modules (`config`, `utils`, etc.) can enhance maintainability.

Lines 12-19: Consider stricter size constraints in `zFieldMetadata`

`size: z.coerce.number().nullish()` doesn't impose a max limit. Defining an upper bound could prevent excessively large metadata from being accepted.

Lines 51-58: Validate `signedUrl` as a proper URL

Using `z.string()` for `signedUrl` is correct but unrestrictive. If feasible, you can add `.url()` to confirm it's a valid URL.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)

- src/lib/s3/client.ts (1 hunks)
- src/lib/s3/schemas.ts (1 hunks)
- src/server/routers/files.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- src/server/routers/files.ts
🔇 Additional comments (4)

src/lib/s3/client.ts (1)

Lines 39-44: Add URL sanitization for public file URLs

This is a repeat of a past recommendation. Directly concatenating the key without sanitizing risks invalid or unsafe URLs.

```diff
-export const getFilePublicUrl = (key: string | null | undefined) => {
-  if (!key) {
-    return undefined;
-  }
-  return `${env.NEXT_PUBLIC_S3_BUCKET_PUBLIC_URL}/${key}`;
-};
+export const getFilePublicUrl = (key: string | null | undefined) => {
+  if (!key) {
+    return undefined;
+  }
+  const sanitizedKey = encodeURIComponent(key.replace(/^\/+/, ''));
+  return `${env.NEXT_PUBLIC_S3_BUCKET_PUBLIC_URL}/${sanitizedKey}`;
+};
```

src/lib/s3/schemas.ts (3)

Lines 1-3: Ensure i18next configuration availability

Using `t` from `'i18next'` requires correct i18n setup. Confirm that translations and language contexts are properly loaded and available in all runtime environments.

Lines 21-38: Wrap or handle translation fallback in `.superRefine`

The call to `t` may fail or return an empty string if translations are missing keys. Ensure the fallback is handled or the missing key is easily identifiable in logs.

Lines 40-50: Adopt security-focused validation rules for file uploads

As noted in past reviews, limiting metadata size and enforcing a strict MIME pattern helps mitigate security risks.

```diff
 export const zUploadSignedUrlInput = () =>
   z.object({
-    metadata: z.record(z.string(), z.string()).optional(),
+    metadata: z.record(z.string(), z.string().max(1024)).optional(),
     name: z.string().max(255),
-    type: z.string(),
+    type: z.string().regex(/^[a-zA-Z]+\/[-+.a-zA-Z0-9]+$/),
     size: z.number().min(1).max(5 * 1024 * 1024),
     collection: zFilesCollectionName(),
   });
```
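The two pattern checks proposed for the upload schema — the MIME format for `type` and, in the earlier round, the path-separator ban for `name` — can be exercised standalone. The regexes below are copied from the review's suggestions; the variable names are illustrative.

```typescript
// Regexes copied from the review's suggested schema rules.
const mimePattern = /^[a-zA-Z]+\/[-+.a-zA-Z0-9]+$/; // type/subtype
const namePattern = /^[^/\\]*$/;                    // no path separators

// A few representative MIME values.
const mimeSamples: Array<[string, boolean]> = [
  ['image/png', true],
  ['image/svg+xml', true],
  ['application/vnd.ms-excel', true],
  ['not-a-mime', false],
  ['image/', false],
];

// File names with and without traversal attempts.
const nameSamples: Array<[string, boolean]> = [
  ['avatar.png', true],
  ['../etc/passwd', false],
  ['..\\windows\\system32', false],
];

for (const [value, expected] of mimeSamples) {
  if (mimePattern.test(value) !== expected) throw new Error(`mime check failed for ${value}`);
}
for (const [value, expected] of nameSamples) {
  if (namePattern.test(value) !== expected) throw new Error(`name check failed for ${value}`);
}
```

Note the MIME pattern is deliberately loose (it accepts made-up subtypes); it filters malformed input, while the real allow-list of types belongs in the collection config that `validateFile` checks.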
```ts
  try {
    const presignedUrlOutput =
      await params.trpcClient.files.uploadPresignedUrl.mutate({
        metadata: params.metadata,
        collection: params.collection,
        type: params.file.type,
        size: params.file.size,
        name: params.file.name,
      });

    const response = await fetch(presignedUrlOutput.signedUrl, {
      method: 'PUT',
      headers: { 'Content-Type': params.file.type },
      body: params.file,
    });

    if (!response.ok) {
      throw new Error('Failed to upload file');
    }

    return presignedUrlOutput.key;
  } catch (error) {
    params.onError?.(params.file, error);
    throw error;
  }
};
```
🛠️ Refactor suggestion
Improve error diagnostics in file uploads.
At line 29, the error thrown is generic. Incorporating response details (like status code and body) can simplify troubleshooting. Including these details in the thrown error or logging them before rethrowing can greatly assist with debugging.
```diff
 try {
   ...
   if (!response.ok) {
-    throw new Error('Failed to upload file');
+    throw new Error(`Failed to upload file. Status: ${response.status} - ${response.statusText}`);
   }
   ...
 } catch (error) {
```
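The richer error message can be sketched against a minimal `Response`-like shape, so the formatting logic runs outside the browser. `ResponseLike` and `assertUploadOk` are illustrative names, not part of the PR.

```typescript
// Just the fields of fetch's Response that the error message needs.
type ResponseLike = { ok: boolean; status: number; statusText: string };

// Throws the detailed message suggested in the review when the upload failed.
const assertUploadOk = (response: ResponseLike): void => {
  if (!response.ok) {
    throw new Error(
      `Failed to upload file. Status: ${response.status} - ${response.statusText}`
    );
  }
};
```

With a real `fetch` response, `assertUploadOk(response)` replaces the bare `if (!response.ok) throw ...` check, and the caller's logs immediately show whether the presigned PUT failed with 403 (expired/invalid signature), 404, or a 5xx from the storage backend.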
Actionable comments posted: 1
♻️ Duplicate comments (1)

src/lib/s3/client.ts (1)

Lines 5-37: 🛠️ Refactor suggestion — Enhance error handling, security, and performance

The current implementation lacks file validation, progress tracking, and robust error handling. Consider implementing:

- File size and type validation before upload
- Upload progress tracking using XMLHttpRequest
- More specific error handling for different failure scenarios

For a detailed implementation, please refer to the previous review comment that provided a comprehensive diff.

🧹 Nitpick comments (3)

src/lib/s3/client.ts (1)

Lines 33-36: Error is rethrown without transformation

When catching and rethrowing an error, consider transforming it to include more context about the operation that failed, especially if this error might bubble up to user-facing components.

```diff
 } catch (error) {
   params.onError?.(params.file, error);
-  throw error;
+  throw error instanceof Error
+    ? error
+    : new Error('Failed to upload file to storage service');
 }
```

src/lib/s3/schemas.ts (2)
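The error-normalization from the diff above is small enough to check in isolation; `toError` is a hypothetical helper name for the rethrow expression.

```typescript
// Normalize unknown catch values into an Error with a stable message,
// mirroring the rethrow suggested for uploadFile's catch block.
const toError = (e: unknown): Error =>
  e instanceof Error ? e : new Error('Failed to upload file to storage service');
```

This preserves original `Error` instances (and their stack traces) while giving string or object throws a predictable shape for user-facing handlers.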
Lines 13-19: Consider adding stricter constraints for `name` and `type`

`zu.string.nonEmptyNullish(z.string())` ensures the value is not empty or nullish, which is a good start. However, adding a maximum length would prevent excessively large strings from being accepted.

```diff
 export const zFieldMetadata = () =>
   z.object({
     lastModifiedDate: z.date().optional(),
-    name: zu.string.nonEmptyNullish(z.string()),
+    name: zu.string.nonEmptyNullish(z.string().max(255)),
     size: z.coerce.number().nullish(),
-    type: zu.string.nonEmptyNullish(z.string()),
+    type: zu.string.nonEmptyNullish(z.string().max(255)),
   });
```

Lines 50-52: Enforce MIME type constraints or rely on `validateFile`

Currently, `type` is a free-form string. If `validateFile({ input, config })` does not enforce MIME type checks, add them here to minimize invalid file types.

```diff
-    type: z.string(),
+    type: z
+      .string()
+      .regex(/^[a-zA-Z0-9]+\/[a-zA-Z0-9.\-+]+$/, 'Invalid MIME type'),
     size: z.number(),
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)

- src/lib/s3/client.ts (1 hunks)
- src/lib/s3/schemas.ts (1 hunks)
- src/server/routers/files.ts (1 hunks)

🚧 Files skipped from review as they are similar to previous changes (1)

- src/server/routers/files.ts
🔇 Additional comments (8)
src/lib/s3/client.ts (4)
5-11
: Function signature looks good for basic file uploads.The function parameters provide a clear interface for handling file uploads with appropriate types. Consider adding an optional
onProgress
parameter to track upload progress in the future.
28-30
: Improve error diagnostics in file uploads.The error thrown is generic, making troubleshooting difficult. Include response details like status code and body text for better diagnostics.
if (!response.ok) { - throw new Error('Failed to upload file'); + throw new Error(`Failed to upload file. Status: ${response.status} - ${response.statusText}`); }
`39-44`: Add URL sanitization for public file URLs.

The current implementation directly concatenates the key with the public URL without sanitization.

```diff
 export const getFilePublicUrl = (key: string | null | undefined) => {
   if (!key) {
     return undefined;
   }
-  return `${env.NEXT_PUBLIC_S3_BUCKET_PUBLIC_URL}/${key}`;
+  const sanitizedKey = encodeURIComponent(key.replace(/^\/+/, ''));
+  return `${env.NEXT_PUBLIC_S3_BUCKET_PUBLIC_URL}/${sanitizedKey}`;
 };
```
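The sanitization step is easy to check in isolation. In this sketch the bucket URL is passed as a parameter instead of coming from the app's `env` module — an assumption made so the snippet is self-contained:

```typescript
// Builds the public URL for a stored object. Leading slashes are
// stripped and the rest of the key is percent-encoded, so a key like
// "../secret" cannot alter the URL path.
const getFilePublicUrl = (
  key: string | null | undefined,
  bucketPublicUrl: string
): string | undefined => {
  if (!key) {
    return undefined;
  }
  const sanitizedKey = encodeURIComponent(key.replace(/^\/+/, ''));
  return `${bucketPublicUrl}/${sanitizedKey}`;
};
```

One trade-off to be aware of: `encodeURIComponent` also encodes legitimate `/` separators, so a key such as `users/1/avatar.png` becomes a single path segment; if nested keys are expected, encoding each segment separately may be preferable.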
`15`: ❓ Verification inconclusive — no metadata transformation applied.

The metadata is passed directly without any transformation. If your backend expects metadata to be stringified, you should apply the appropriate transformation here.

🏁 Script executed:

```shell
#!/bin/bash
# Check if metadata is transformed elsewhere in the codebase
rg -A 3 "metadata.*stringify" --type ts
```

Length of output: 41

Action required: validate metadata transformation.

The search for stringification (`rg -A 3 "metadata.*stringify" --type ts`) produced no matches, which indicates there is no centralized transformation (e.g. `JSON.stringify`) applied to `params.metadata` elsewhere in the codebase. Please manually verify that passing raw metadata is acceptable, or adjust the code accordingly if the backend expects a transformed (stringified) version.

- Ensure the backend can accept the metadata in its raw form.
- If a transformation is necessary, apply an appropriate conversion (such as `JSON.stringify(params.metadata)`).

src/lib/s3/schemas.ts (4)
`1-2`: Imports look good.

The imports from `i18next` and `zod` are consistent with the rest of the file. No concerns here.
`22-38`: Validate compatibility of the global `File` constructor.

Using `z.instanceof(File)` may cause environment-specific issues if this code is used outside the browser (e.g., in Node.js). Ensure that the `File` constructor is available in every environment where this schema is used.

Would you like me to help investigate references to this code in the backend, or run a script to detect potential environment discrepancies?
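One way to hedge against that concern is a runtime guard that does not reference the DOM `File` type directly. Everything below — the names and the structural fallback — is an assumption for illustration, not code from the PR:

```typescript
// Look up a global File constructor if one exists (browsers and recent
// Node.js versions provide it; older runtimes do not).
const FileCtor = (globalThis as { File?: new (...args: never[]) => unknown })
  .File;

// Accept real File instances when possible; otherwise fall back to a
// structural check on the fields the upload schemas actually read.
const isFileLike = (value: unknown): boolean => {
  if (FileCtor !== undefined && value instanceof FileCtor) return true;
  if (typeof value !== 'object' || value === null) return false;
  const candidate = value as { name?: unknown; size?: unknown };
  return (
    typeof candidate.name === 'string' && typeof candidate.size === 'number'
  );
};
```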
`46-49`: Good path traversal protection.

Enforcing `^[^/\\]*$` and capping length at 255 characters helps mitigate name-based security risks like directory traversal.
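For reference, that name constraint can be checked standalone; the helper name is an assumption:

```typescript
// Filename rule from the schema: no "/" or "\" anywhere in the name,
// and at most 255 characters.
const FILE_NAME_RE = /^[^/\\]*$/;

const isSafeFileName = (name: string): boolean =>
  FILE_NAME_RE.test(name) && name.length <= 255;
```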
`57-61`: Output schema is concise and clear.

The fields `key` and `signedUrl` are well-defined. No immediate concerns here.
```ts
export const zUploadSignedUrlInput = () =>
  z.object({
    metadata: z.record(z.string(), z.string()).optional(),
```
🛠️ Refactor suggestion

Limit the size of metadata keys and values.

`z.record(z.string(), z.string()).optional()` does not restrict how large the metadata object can become. Consider capping the key and value lengths to prevent abuse or memory overruns.
```diff
     metadata: z
-      .record(z.string(), z.string())
+      .record(
+        z.string().max(50), // e.g. limit key length
+        z.string().max(1024), // e.g. limit value length
+      )
       .optional(),
```
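The capped-record idea can be expressed as a plain check to make the limits concrete. The constants and the extra entry-count cap are assumptions for illustration; only the key/value length limits come from the suggestion itself:

```typescript
const MAX_KEY_LENGTH = 50; // from the suggestion
const MAX_VALUE_LENGTH = 1024; // from the suggestion
const MAX_ENTRIES = 20; // hypothetical extra cap, not in the diff

// Mirrors .record(z.string().max(50), z.string().max(1024)).optional():
// undefined is allowed, every key and value must respect its cap, and
// the total number of entries is bounded.
const isValidUploadMetadata = (
  metadata: Record<string, string> | undefined
): boolean => {
  if (metadata === undefined) return true;
  const entries = Object.entries(metadata);
  if (entries.length > MAX_ENTRIES) return false;
  return entries.every(
    ([key, value]) =>
      key.length <= MAX_KEY_LENGTH && value.length <= MAX_VALUE_LENGTH
  );
};
```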
Describe your changes
closes #<issue_id>
Simplified version of the Avatar Upload process:
Notes:

- `key` of the file

Left to do / Lines of thought
Screenshots
Documentation
Checklist

- I launched the `pnpm storybook` command and everything is working

Summary by CodeRabbit

- `ImageUpload` component facilitates image file uploads with improved user interaction.
- `FieldUpload` and `FieldUploadPreview` components enhance file upload functionality within forms.
- `validateFile` utility function improves error handling for file uploads.