I had a similar problem about a year ago. The parent table had about
1.5B rows, each with a unique ID from a bigserial. My approach was to
create all the child tables needed for the past and the next month or
so. Then, I simply did something like:
begin;
insert into table select * from only table where id between 1 and 10000000;
delete from only table where id between 1 and 10000000;
-- first few times check to make sure it's working of course
commit;
begin;
insert into table select * from only table where id between 10000001 and 20000000;
delete from only table where id between 10000001 and 20000000;
commit;
and so on. New entries were already going into the child tables as
they showed up, and old entries migrated 10M rows at a time. This
kept each move small enough that the machine never ran out of any
resource it would have exhausted moving 1.5B rows at once.
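The batch loop above can be sketched as a small driver script that emits one transaction per ID slice. This is just an illustration of the pattern, not the original script: the table name, the 10M batch size, and the assumption that a trigger on the parent routes the insert into the right child table (as in pre-declarative partitioning setups) are all mine.

```python
def batch_sql(table, max_id, batch=10_000_000):
    """Yield one migration transaction per slice of `batch` IDs,
    from 1 up to max_id inclusive."""
    start = 1
    while start <= max_id:
        end = min(start + batch - 1, max_id)
        # Insert into the parent (trigger routes rows to children),
        # then delete the originals from only the parent.
        yield (
            "begin;\n"
            f"insert into {table} select * from only {table} "
            f"where id between {start} and {end};\n"
            f"delete from only {table} "
            f"where id between {start} and {end};\n"
            "commit;"
        )
        start = end + 1

# Print the batches so they can be reviewed before feeding them to psql.
for stmt in batch_sql("measurements", 25_000_000):
    print(stmt)
    print()
```

Running the first batch or two by hand and checking the row counts before letting the rest rip is exactly the "check to make sure it's working" step above.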