How can I handle a Python training array that's too large for memory?

So I have over 30 million objects that I need to use as my training data. My issue is simple: when I build my training array by appending in a loop, past a certain threshold the list becomes too large and the Python process gets killed (presumably by the OS running out of memory). What is a way to get around this? I have been trying to figure this out for hours and keep coming up short!
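To make the problem concrete, here's a minimal sketch of the kind of loop I mean. The names and the dummy 100-float "objects" are placeholders, not my real data; at ~30 million iterations this is the pattern that gets killed:

```python
import numpy as np

N = 1000  # stand-in for the real ~30 million objects

training = []
for i in range(N):
    # placeholder for however each object is actually produced/loaded
    sample = np.random.rand(100)
    training.append(sample)

# the conversion is also painful: the list and the new array
# briefly coexist, roughly doubling peak memory
X = np.array(training)
print(X.shape)
```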