Task
Create a Job following the implementation of the Epic.
This Job has to use the new BufferedCvsReader to extract the content data from a file, to avoid memory issues.
It is desirable to be able to start or restart the import from a given row number, skipping all previous rows.
The new Job can be instructed to perform a db commit after every n rows are saved.
The new method should return an immutable ImportSummary class instead of a HashMap reporting the results.
The method should consume a single class with all the required parameters instead of taking a large number of arguments. Right now it takes 15 parameters, when the maximum number of allowed params should be 7. Any private methods created here have to meet these requirements too.
Optionally, we can refactor the method ImportUtil.importFile to reduce its complexity and make it clearer to read and understand. Currently it is 115 lines; the recommended maximum is 15.
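The streaming, restart, batching, and summary requirements above can be sketched as a single row loop. This is a hypothetical sketch only: the BatchedCsvImporter, RowHandler, and ImportSummary names are illustrative stand-ins, not the actual dotCMS classes, and a real implementation would read rows through the new BufferedCvsReader rather than a raw BufferedReader.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;

public final class BatchedCsvImporter {

    /** Immutable result object, standing in for the immutable ImportSummary the issue asks for. */
    public static final class ImportSummary {
        private final int imported;
        private final int skipped;
        private final List<String> errors;

        public ImportSummary(int imported, int skipped, List<String> errors) {
            this.imported = imported;
            this.skipped = skipped;
            this.errors = List.copyOf(errors); // defensive copy keeps the summary immutable
        }
        public int imported() { return imported; }
        public int skipped()  { return skipped; }
        public List<String> errors() { return errors; }
    }

    /** Stand-in for whatever parses and persists one CSV row. */
    public interface RowHandler {
        void save(String rawRow) throws Exception;
        void commit();
    }

    /**
     * Reads the CSV line by line (never loading the whole file) and commits
     * every {@code commitGranularity} saved rows. {@code startRow} lets a
     * restarted job skip rows that were already imported.
     */
    public static ImportSummary importCsv(Reader csv, int startRow,
                                          int commitGranularity, RowHandler handler) {
        int imported = 0, skipped = 0, sinceCommit = 0;
        List<String> errors = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(csv)) {
            String line;
            int rowNumber = 0;
            while ((line = reader.readLine()) != null) {
                rowNumber++;
                if (rowNumber < startRow) { skipped++; continue; } // restart support
                try {
                    handler.save(line);
                    imported++;
                    if (++sinceCommit >= commitGranularity) {
                        handler.commit(); // db commit after n rows
                        sinceCommit = 0;
                    }
                } catch (Exception e) {
                    errors.add("row " + rowNumber + ": " + e.getMessage());
                }
            }
            if (sinceCommit > 0) handler.commit(); // flush the final partial batch
        } catch (IOException io) {
            errors.add("read error: " + io.getMessage());
        }
        return new ImportSummary(imported, skipped, errors);
    }
}
```

Because rows stream through one at a time and each batch is committed and released, memory use stays flat regardless of file size, which is the point of the BufferedCvsReader requirement.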
Proposed Objective
Core Features
Proposed Priority
Priority 2 - Important
Acceptance Criteria
The Job should be able to successfully import a large number of content items, e.g. 10K items.
The Job should never get stuck or run out of memory.
The same options passed to the Struts Action (preview, import, fields, and key selections) should be accepted by this Job, since the functionality should remain as it is now.
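The criterion above (accept the same options as the Struts action) pairs naturally with the single-parameter-class requirement from the task list: one immutable object carries the preview flag, field and key selections, and the new batching options. This is a hypothetical sketch; the ImportParams class, its builder, and every field name here are assumptions, not dotCMS API.

```java
import java.util.List;

/** Hypothetical parameter object: one argument in place of 15 loose parameters. */
public final class ImportParams {
    private final boolean preview;
    private final List<String> fields;
    private final List<String> keyFields;
    private final int startRow;
    private final int commitGranularity;

    private ImportParams(Builder b) {
        this.preview = b.preview;
        this.fields = List.copyOf(b.fields);       // defensive copies keep the
        this.keyFields = List.copyOf(b.keyFields); // object deeply immutable
        this.startRow = b.startRow;
        this.commitGranularity = b.commitGranularity;
    }

    public boolean preview() { return preview; }
    public List<String> fields() { return fields; }
    public List<String> keyFields() { return keyFields; }
    public int startRow() { return startRow; }
    public int commitGranularity() { return commitGranularity; }

    public static final class Builder {
        private boolean preview;
        private List<String> fields = List.of();
        private List<String> keyFields = List.of();
        private int startRow = 1;           // default: import from the first row
        private int commitGranularity = 100; // default: commit every 100 rows

        public Builder preview(boolean v) { this.preview = v; return this; }
        public Builder fields(List<String> v) { this.fields = v; return this; }
        public Builder keyFields(List<String> v) { this.keyFields = v; return this; }
        public Builder startRow(int v) { this.startRow = v; return this; }
        public Builder commitGranularity(int v) { this.commitGranularity = v; return this; }
        public ImportParams build() { return new ImportParams(this); }
    }
}
```

A builder also lets private helper methods receive the one object and stay under the 7-parameter ceiling without threading individual arguments through every call.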
fabrizzio-dotCMS changed the title from "Create a Job with the relevant logic from ImportUtil and ImportContentletsAction to successfully imports large files" to "Create a Job with the relevant logic from ImportUtil and ImportContentletsAction to successfully import large files" on Aug 7, 2024.
Added a LongConsumer progress callback to provide real-time progress updates during CSV import. Finalized ImportContentletProcessor implementation to support content imports, progress tracking, cancellation, and proper error handling.
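Wiring a java.util.function.LongConsumer callback into the row loop, as this comment describes, can look like the sketch below. The ProgressDemo class and its loop are illustrative only, not the actual ImportContentletProcessor code; the assumption here is that the processor invokes the consumer with the running row count at some fixed interval and once at completion.

```java
import java.util.function.LongConsumer;

public final class ProgressDemo {

    /**
     * Processes {@code totalRows} rows, reporting the running count through
     * the supplied LongConsumer every 100 rows and once on completion.
     * Returns the number of rows processed.
     */
    public static long processRows(long totalRows, LongConsumer progress) {
        long done = 0;
        for (long row = 1; row <= totalRows; row++) {
            // ... import one row here ...
            done++;
            if (done % 100 == 0 || done == totalRows) {
                progress.accept(done); // real-time progress update
            }
        }
        return done;
    }
}
```

Because LongConsumer is a functional interface, the job can pass a lambda such as `count -> jobProgress.update(count)` (a hypothetical updater) without coupling the processor to any particular progress-reporting mechanism.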
Parent Issue: #29482