So I have over 30 million objects that I need to use as my training data. My issue is simple: when I build my training array iteratively, past a certain size the list grows too large, the process runs out of memory, and Python gets killed by the OS. What is a way to get around this? I have been trying to figure this out for hours and keep coming up short!
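Here's a simplified version of the pattern that fails, plus a disk-backed `np.memmap` variant I've been sketching as a possible workaround. Note that `iter_objects()`, `encode()`, and the shape/dtype constants are placeholder stand-ins for my real pipeline, and this assumes each sample encodes to a fixed-size float vector:

```python
import numpy as np

N_SAMPLES = 1000      # stand-in; my real count is ~30 million
N_FEATURES = 128      # stand-in for my real feature width

def iter_objects():   # stand-in for my real data source
    for i in range(N_SAMPLES):
        yield i

def encode(obj):      # stand-in: one object -> fixed-size float vector
    return np.full(N_FEATURES, float(obj), dtype=np.float32)

# What I do now (simplified) -- the list grows until Python is killed:
#   samples = [encode(obj) for obj in iter_objects()]
#   X = np.array(samples)

# Workaround sketch: stream each sample straight into a disk-backed
# array, so only one sample needs to live in RAM at a time.
X = np.lib.format.open_memmap(
    "train.npy", mode="w+", dtype=np.float32,
    shape=(N_SAMPLES, N_FEATURES),
)
for i, obj in enumerate(iter_objects()):
    X[i] = encode(obj)  # writes through to disk, nothing accumulates
X.flush()

# Training code can then reopen it lazily; with mmap_mode="r",
# only the rows actually touched get paged into memory:
X_lazy = np.load("train.npy", mmap_mode="r")
```

Is something along these lines the right direction, or is there a more standard way to handle datasets this size?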